/*
 *  linux/fs/proc/base.c
 *
 *  Copyright (C) 1991, 1992 Linus Torvalds
 *
 *  proc base directory handling functions
 *
 *  1999, Al Viro. Rewritten. Now it covers the whole per-process part.
 *  Instead of using magical inumbers to determine the kind of object
 *  we allocate and fill in-core inodes upon lookup. They don't even
 *  go into icache. We cache the reference to task_struct upon lookup too.
 *  Eventually it should become a filesystem in its own. We don't use the
 *  rest of procfs anymore.
 *
 *
 *  Changelog:
 *  17-Jan-2005
 *  Allan Bezerra
 *  Bruna Moreira <bruna.moreira@indt.org.br>
 *  Edjard Mota <edjard.mota@indt.org.br>
 *  Ilias Biris <ilias.biris@indt.org.br>
 *  Mauricio Lin <mauricio.lin@indt.org.br>
 *
 *  Embedded Linux Lab - 10LE Instituto Nokia de Tecnologia - INdT
 *
 *  A new process specific entry (smaps) included in /proc. It shows the
 *  size of rss for each memory area. The maps entry lacks information
 *  about physical memory size (rss) for each mapped file, i.e.,
 *  rss information for executables and library files.
 *  This additional information is useful for any tools that need to know
 *  about physical memory consumption for a process specific library.
 *
 *  Changelog:
 *  21-Feb-2005
 *  Embedded Linux Lab - 10LE Instituto Nokia de Tecnologia - INdT
 *  Pud inclusion in the page table walking.
 *
 *  ChangeLog:
 *  10-Mar-2005
 *  10LE Instituto Nokia de Tecnologia - INdT:
 *  A better way to walk through the page table as suggested by Hugh Dickins.
 *
 *  Simo Piiroinen <simo.piiroinen@nokia.com>:
 *  Smaps information related to shared, private, clean and dirty pages.
 *
 *  Paul Mundt <paul.mundt@nokia.com>:
 *  Overall revision about smaps.
 */

#include <asm/uaccess.h>

#include <linux/errno.h>
#include <linux/time.h>
#include <linux/proc_fs.h>
#include <linux/stat.h>
#include <linux/task_io_accounting_ops.h>
#include <linux/init.h>
#include <linux/capability.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/string.h>
#include <linux/seq_file.h>
#include <linux/namei.h>
#include <linux/mnt_namespace.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/rcupdate.h>
#include <linux/kallsyms.h>
#include <linux/stacktrace.h>
#include <linux/resource.h>
#include <linux/module.h>
#include <linux/mount.h>
#include <linux/security.h>
#include <linux/ptrace.h>
#include <linux/tracehook.h>
#include <linux/printk.h>
#include <linux/cgroup.h>
#include <linux/cpuset.h>
#include <linux/audit.h>
#include <linux/poll.h>
#include <linux/nsproxy.h>
#include <linux/oom.h>
#include <linux/elf.h>
#include <linux/pid_namespace.h>
#include <linux/user_namespace.h>
#include <linux/fs_struct.h>
#include <linux/slab.h>
#include <linux/flex_array.h>
#include <linux/posix-timers.h>
#ifdef CONFIG_HARDWALL
#include <asm/hardwall.h>
#endif
#include <trace/events/oom.h>
#include "internal.h"
#include "fd.h"

/* NOTE:
 *	Implementing inode permission operations in /proc is almost
 *	certainly an error.  Permission checks need to happen during
 *	each system call, not at open time.  The reason is that most of
 *	what we wish to check for permissions in /proc varies at runtime.
 *
 *	The classic example of a problem is opening file descriptors
 *	in /proc for a task before it execs a suid executable.
 */

struct pid_entry {
	const char *name;
	int len;
	umode_t mode;
	const struct inode_operations *iop;
	const struct file_operations *fop;
	union proc_op op;
};

#define NOD(NAME, MODE, IOP, FOP, OP) {			\
	.name = (NAME),					\
	.len  = sizeof(NAME) - 1,			\
	.mode = MODE,					\
	.iop  = IOP,					\
	.fop  = FOP,					\
	.op   = OP,					\
}

#define DIR(NAME, MODE, iops, fops)	\
	NOD(NAME, (S_IFDIR|(MODE)), &iops, &fops, {} )
#define LNK(NAME, get_link)					\
	NOD(NAME, (S_IFLNK|S_IRWXUGO),				\
		&proc_pid_link_inode_operations, NULL,		\
		{ .proc_get_link = get_link } )
#define REG(NAME, MODE, fops)				\
	NOD(NAME, (S_IFREG|(MODE)), NULL, &fops, {})
#define ONE(NAME, MODE, show)				\
	NOD(NAME, (S_IFREG|(MODE)),			\
		NULL, &proc_single_file_operations,	\
		{ .proc_show = show } )

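/*
 * The NOD/DIR/LNK/REG/ONE macros above build the static pid_entry
 * tables that populate each /proc/<pid> directory; for example, an
 * entry such as REG("environ", S_IRUSR, proc_environ_operations)
 * (used further down in this file) describes a regular file wired to
 * the given file_operations.
 */
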
/*
 * Count the number of hardlinks for the pid_entry table, excluding the .
 * and .. links.
 */
static unsigned int pid_entry_count_dirs(const struct pid_entry *entries,
	unsigned int n)
{
	unsigned int i;
	unsigned int count;

	count = 0;
	for (i = 0; i < n; ++i) {
		if (S_ISDIR(entries[i].mode))
			++count;
	}

	return count;
}

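/*
 * Grab the root of @task's fs context under task_lock().  Returns
 * -ENOENT if the task no longer has an fs_struct (e.g. it is exiting).
 */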
static int get_task_root(struct task_struct *task, struct path *root)
{
	int result = -ENOENT;

	task_lock(task);
	if (task->fs) {
		get_fs_root(task->fs, root);
		result = 0;
	}
	task_unlock(task);
	return result;
}

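/*
 * proc_cwd_link() and proc_root_link() back the /proc/<pid>/cwd and
 * /proc/<pid>/root symlinks; each resolves to a struct path taken from
 * the task's fs_struct while task_lock() is held.
 */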
static int proc_cwd_link(struct dentry *dentry, struct path *path)
{
	struct task_struct *task = get_proc_task(dentry->d_inode);
	int result = -ENOENT;

	if (task) {
		task_lock(task);
		if (task->fs) {
			get_fs_pwd(task->fs, path);
			result = 0;
		}
		task_unlock(task);
		put_task_struct(task);
	}
	return result;
}

static int proc_root_link(struct dentry *dentry, struct path *path)
{
	struct task_struct *task = get_proc_task(dentry->d_inode);
	int result = -ENOENT;

	if (task) {
		result = get_task_root(task, path);
		put_task_struct(task);
	}
	return result;
}

static int proc_pid_cmdline(struct seq_file *m, struct pid_namespace *ns,
			    struct pid *pid, struct task_struct *task)
{
	/*
	 * Rely on struct seq_operations::show() being called once
	 * per internal buffer allocation. See single_open(), traverse().
	 */
	BUG_ON(m->size < PAGE_SIZE);
	m->count += get_cmdline(task, m->buf, PAGE_SIZE);
	return 0;
}

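/*
 * /proc/<pid>/auxv: mm->saved_auxv holds the ELF auxiliary vector as
 * (type, value) pairs of unsigned longs terminated by an AT_NULL entry,
 * which is why the loop below counts two words at a time.
 */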
static int proc_pid_auxv(struct seq_file *m, struct pid_namespace *ns,
			 struct pid *pid, struct task_struct *task)
{
	struct mm_struct *mm = mm_access(task, PTRACE_MODE_READ);
	if (mm && !IS_ERR(mm)) {
		unsigned int nwords = 0;
		do {
			nwords += 2;
		} while (mm->saved_auxv[nwords - 2] != 0); /* AT_NULL */
		seq_write(m, mm->saved_auxv, nwords * sizeof(mm->saved_auxv[0]));
		mmput(mm);
		return 0;
	} else
		return PTR_ERR(mm);
}


#ifdef CONFIG_KALLSYMS
/*
 * Provides a wchan file via kallsyms in a proper one-value-per-file format.
 * Returns the resolved symbol. If that fails, simply return the address.
 */
static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
			  struct pid *pid, struct task_struct *task)
{
	unsigned long wchan;
	char symname[KSYM_NAME_LEN];

	wchan = get_wchan(task);

	if (lookup_symbol_name(wchan, symname) < 0) {
		if (!ptrace_may_access(task, PTRACE_MODE_READ))
			return 0;
		else
			return seq_printf(m, "%lu", wchan);
	} else
		return seq_printf(m, "%s", symname);
}
#endif /* CONFIG_KALLSYMS */

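/*
 * lock_trace() takes cred_guard_mutex to serialize against exec (so the
 * target cannot gain privileges mid-inspection) and then requires full
 * PTRACE_MODE_ATTACH rights; callers may inspect the task's kernel
 * state until the matching unlock_trace().
 */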
static int lock_trace(struct task_struct *task)
{
	int err = mutex_lock_killable(&task->signal->cred_guard_mutex);
	if (err)
		return err;
	if (!ptrace_may_access(task, PTRACE_MODE_ATTACH)) {
		mutex_unlock(&task->signal->cred_guard_mutex);
		return -EPERM;
	}
	return 0;
}

static void unlock_trace(struct task_struct *task)
{
	mutex_unlock(&task->signal->cred_guard_mutex);
}

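/*
 * /proc/<pid>/stack (CONFIG_STACKTRACE): one "[<addr>] symbol" line per
 * frame.  %pK hides the raw address from unprivileged readers (see
 * kptr_restrict) while %pS still prints the symbol name.
 */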
#ifdef CONFIG_STACKTRACE

#define MAX_STACK_TRACE_DEPTH	64

static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
			  struct pid *pid, struct task_struct *task)
{
	struct stack_trace trace;
	unsigned long *entries;
	int err;
	int i;

	entries = kmalloc(MAX_STACK_TRACE_DEPTH * sizeof(*entries), GFP_KERNEL);
	if (!entries)
		return -ENOMEM;

	trace.nr_entries	= 0;
	trace.max_entries	= MAX_STACK_TRACE_DEPTH;
	trace.entries		= entries;
	trace.skip		= 0;

	err = lock_trace(task);
	if (!err) {
		save_stack_trace_tsk(task, &trace);

		for (i = 0; i < trace.nr_entries; i++) {
			seq_printf(m, "[<%pK>] %pS\n",
				   (void *)entries[i], (void *)entries[i]);
		}
		unlock_trace(task);
	}
	kfree(entries);

	return err;
}
#endif

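/*
 * The three schedstat fields below are: time spent on the CPU, time
 * spent waiting on a runqueue (both in nanoseconds), and the number of
 * timeslices run on this CPU.
 */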
#ifdef CONFIG_SCHEDSTATS
/*
 * Provides /proc/PID/schedstat
 */
static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
			      struct pid *pid, struct task_struct *task)
{
	return seq_printf(m, "%llu %llu %lu\n",
			(unsigned long long)task->se.sum_exec_runtime,
			(unsigned long long)task->sched_info.run_delay,
			task->sched_info.pcount);
}
#endif

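/*
 * /proc/<pid>/latency (CONFIG_LATENCYTOP): one record per line as
 * "count total_time max_time backtrace...".  Writing anything to the
 * file clears the accumulated records.
 */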
#ifdef CONFIG_LATENCYTOP
static int lstats_show_proc(struct seq_file *m, void *v)
{
	int i;
	struct inode *inode = m->private;
	struct task_struct *task = get_proc_task(inode);

	if (!task)
		return -ESRCH;
	seq_puts(m, "Latency Top version : v0.1\n");
	for (i = 0; i < 32; i++) {
		struct latency_record *lr = &task->latency_record[i];
		if (lr->backtrace[0]) {
			int q;
			seq_printf(m, "%i %li %li",
				   lr->count, lr->time, lr->max);
			for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
				unsigned long bt = lr->backtrace[q];
				if (!bt)
					break;
				if (bt == ULONG_MAX)
					break;
				seq_printf(m, " %ps", (void *)bt);
			}
			seq_putc(m, '\n');
		}
	}
	put_task_struct(task);
	return 0;
}

static int lstats_open(struct inode *inode, struct file *file)
{
	return single_open(file, lstats_show_proc, inode);
}

static ssize_t lstats_write(struct file *file, const char __user *buf,
			    size_t count, loff_t *offs)
{
	struct task_struct *task = get_proc_task(file_inode(file));

	if (!task)
		return -ESRCH;
	clear_all_latency_tracing(task);
	put_task_struct(task);

	return count;
}

static const struct file_operations proc_lstats_operations = {
	.open		= lstats_open,
	.read		= seq_read,
	.write		= lstats_write,
	.llseek		= seq_lseek,
	.release	= single_release,
};

#endif

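/*
 * /proc/<pid>/oom_score: the task's badness score normalized to
 * [0, 1000], where 1000 means the task consumes roughly all of
 * RAM + swap.
 */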
static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
			  struct pid *pid, struct task_struct *task)
{
	unsigned long totalpages = totalram_pages + total_swap_pages;
	unsigned long points = 0;

	read_lock(&tasklist_lock);
	if (pid_alive(task))
		points = oom_badness(task, NULL, NULL, totalpages) *
					1000 / totalpages;
	read_unlock(&tasklist_lock);
	return seq_printf(m, "%lu\n", points);
}

struct limit_names {
	const char *name;
	const char *unit;
};

static const struct limit_names lnames[RLIM_NLIMITS] = {
	[RLIMIT_CPU] = {"Max cpu time", "seconds"},
	[RLIMIT_FSIZE] = {"Max file size", "bytes"},
	[RLIMIT_DATA] = {"Max data size", "bytes"},
	[RLIMIT_STACK] = {"Max stack size", "bytes"},
	[RLIMIT_CORE] = {"Max core file size", "bytes"},
	[RLIMIT_RSS] = {"Max resident set", "bytes"},
	[RLIMIT_NPROC] = {"Max processes", "processes"},
	[RLIMIT_NOFILE] = {"Max open files", "files"},
	[RLIMIT_MEMLOCK] = {"Max locked memory", "bytes"},
	[RLIMIT_AS] = {"Max address space", "bytes"},
	[RLIMIT_LOCKS] = {"Max file locks", "locks"},
	[RLIMIT_SIGPENDING] = {"Max pending signals", "signals"},
	[RLIMIT_MSGQUEUE] = {"Max msgqueue size", "bytes"},
	[RLIMIT_NICE] = {"Max nice priority", NULL},
	[RLIMIT_RTPRIO] = {"Max realtime priority", NULL},
	[RLIMIT_RTTIME] = {"Max realtime timeout", "us"},
};

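/*
 * The whole rlim array is snapshotted under the sighand lock, so the
 * table printed below stays internally consistent even if the limits
 * change concurrently.
 */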
/* Display limits for a process */
static int proc_pid_limits(struct seq_file *m, struct pid_namespace *ns,
			   struct pid *pid, struct task_struct *task)
{
	unsigned int i;
	unsigned long flags;

	struct rlimit rlim[RLIM_NLIMITS];

	if (!lock_task_sighand(task, &flags))
		return 0;
	memcpy(rlim, task->signal->rlim, sizeof(struct rlimit) * RLIM_NLIMITS);
	unlock_task_sighand(task, &flags);

	/*
	 * print the file header
	 */
	seq_printf(m, "%-25s %-20s %-20s %-10s\n",
		   "Limit", "Soft Limit", "Hard Limit", "Units");

	for (i = 0; i < RLIM_NLIMITS; i++) {
		if (rlim[i].rlim_cur == RLIM_INFINITY)
			seq_printf(m, "%-25s %-20s ",
				   lnames[i].name, "unlimited");
		else
			seq_printf(m, "%-25s %-20lu ",
				   lnames[i].name, rlim[i].rlim_cur);

		if (rlim[i].rlim_max == RLIM_INFINITY)
			seq_printf(m, "%-20s ", "unlimited");
		else
			seq_printf(m, "%-20lu ", rlim[i].rlim_max);

		if (lnames[i].unit)
			seq_printf(m, "%-10s\n", lnames[i].unit);
		else
			seq_putc(m, '\n');
	}

	return 0;
}

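/*
 * /proc/<pid>/syscall prints "running" when the task is not blocked
 * (its registers cannot be sampled), "<nr> <sp> <pc>" with nr < 0 when
 * it is blocked in the kernel outside a syscall, or the syscall number
 * followed by its six argument registers, the stack pointer and the
 * program counter.
 */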
#ifdef CONFIG_HAVE_ARCH_TRACEHOOK
static int proc_pid_syscall(struct seq_file *m, struct pid_namespace *ns,
			    struct pid *pid, struct task_struct *task)
{
	long nr;
	unsigned long args[6], sp, pc;
	int res = lock_trace(task);
	if (res)
		return res;

	if (task_current_syscall(task, &nr, args, 6, &sp, &pc))
		seq_puts(m, "running\n");
	else if (nr < 0)
		seq_printf(m, "%ld 0x%lx 0x%lx\n", nr, sp, pc);
	else
		seq_printf(m,
		       "%ld 0x%lx 0x%lx 0x%lx 0x%lx 0x%lx 0x%lx 0x%lx 0x%lx\n",
		       nr,
		       args[0], args[1], args[2], args[3], args[4], args[5],
		       sp, pc);
	unlock_trace(task);

	return res;
}
#endif /* CONFIG_HAVE_ARCH_TRACEHOOK */

/************************************************************************/
/* Here the fs part begins */
/************************************************************************/

/* permission checks */
static int proc_fd_access_allowed(struct inode *inode)
{
	struct task_struct *task;
	int allowed = 0;
	/* Allow access to a task's file descriptors if it is us or we
	 * may use ptrace attach to the process and find out that
	 * information.
	 */
	task = get_proc_task(inode);
	if (task) {
		allowed = ptrace_may_access(task, PTRACE_MODE_READ);
		put_task_struct(task);
	}
	return allowed;
}

int proc_setattr(struct dentry *dentry, struct iattr *attr)
{
	int error;
	struct inode *inode = dentry->d_inode;

	if (attr->ia_valid & ATTR_MODE)
		return -EPERM;

	error = inode_change_ok(inode, attr);
	if (error)
		return error;

	setattr_copy(inode, attr);
	mark_inode_dirty(inode);
	return 0;
}

/*
 * May current process learn task's sched/cmdline info (for hide_pid_min=1)
 * or euid/egid (for hide_pid_min=2)?
 */
static bool has_pid_permissions(struct pid_namespace *pid,
				 struct task_struct *task,
				 int hide_pid_min)
{
	if (pid->hide_pid < hide_pid_min)
		return true;
	if (in_group_p(pid->pid_gid))
		return true;
	return ptrace_may_access(task, PTRACE_MODE_READ);
}


static int proc_pid_permission(struct inode *inode, int mask)
{
	struct pid_namespace *pid = inode->i_sb->s_fs_info;
	struct task_struct *task;
	bool has_perms;

	task = get_proc_task(inode);
	if (!task)
		return -ESRCH;
	has_perms = has_pid_permissions(pid, task, 1);
	put_task_struct(task);

	if (!has_perms) {
		if (pid->hide_pid == 2) {
			/*
			 * Let's make getdents(), stat(), and open()
			 * consistent with each other. If a process
			 * may not stat() a file, it shouldn't be seen
			 * in procfs at all.
			 */
			return -ENOENT;
		}

		return -EPERM;
	}
	return generic_permission(inode, mask);
}

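/*
 * Usage note: on a proc mount with hidepid=2 (e.g.
 * "mount -o remount,hidepid=2 /proc"), lookups of another user's
 * /proc/<pid> fail with ENOENT rather than EPERM, so the entry is
 * simply invisible rather than visibly protected.
 */
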
static const struct inode_operations proc_def_inode_operations = {
	.setattr	= proc_setattr,
};

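/*
 * proc_single_show() is the common seq_file backend for the ONE()
 * entries: it resolves the task behind the inode, invokes the entry's
 * ->proc_show() callback, and drops the task reference.
 */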
static int proc_single_show(struct seq_file *m, void *v)
{
	struct inode *inode = m->private;
	struct pid_namespace *ns;
	struct pid *pid;
	struct task_struct *task;
	int ret;

	ns = inode->i_sb->s_fs_info;
	pid = proc_pid(inode);
	task = get_pid_task(pid, PIDTYPE_PID);
	if (!task)
		return -ESRCH;

	ret = PROC_I(inode)->op.proc_show(m, ns, pid, task);

	put_task_struct(task);
	return ret;
}

static int proc_single_open(struct inode *inode, struct file *filp)
{
	return single_open(filp, proc_single_show, inode);
}

static const struct file_operations proc_single_file_operations = {
	.open		= proc_single_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

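/*
 * proc_mem_open() returns the target mm with an mm_count (not mm_users)
 * reference: the mm_struct stays valid for the lifetime of the open
 * file, but its address space may still be torn down on exit, so
 * readers re-take mm_users via atomic_inc_not_zero() around each access.
 */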
struct mm_struct *proc_mem_open(struct inode *inode, unsigned int mode)
{
	struct task_struct *task = get_proc_task(inode);
	struct mm_struct *mm = ERR_PTR(-ESRCH);

	if (task) {
		mm = mm_access(task, mode);
		put_task_struct(task);

		if (!IS_ERR_OR_NULL(mm)) {
			/* ensure this mm_struct can't be freed */
			atomic_inc(&mm->mm_count);
			/* but do not pin its memory */
			mmput(mm);
		}
	}

	return mm;
}

static int __mem_open(struct inode *inode, struct file *file, unsigned int mode)
{
	struct mm_struct *mm = proc_mem_open(inode, mode);

	if (IS_ERR(mm))
		return PTR_ERR(mm);

	file->private_data = mm;
	return 0;
}

static int mem_open(struct inode *inode, struct file *file)
{
	int ret = __mem_open(inode, file, PTRACE_MODE_ATTACH);

	/* OK to pass negative loff_t, we can catch out-of-range */
	file->f_mode |= FMODE_UNSIGNED_OFFSET;

	return ret;
}

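/*
 * Common read/write engine for /proc/<pid>/mem: data is bounced through
 * one temporary page, with access_remote_vm() doing the actual transfer
 * to or from the target address space, at most a page per iteration.
 */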
static ssize_t mem_rw(struct file *file, char __user *buf,
			size_t count, loff_t *ppos, int write)
{
	struct mm_struct *mm = file->private_data;
	unsigned long addr = *ppos;
	ssize_t copied;
	char *page;

	if (!mm)
		return 0;

	page = (char *)__get_free_page(GFP_TEMPORARY);
	if (!page)
		return -ENOMEM;

	copied = 0;
	if (!atomic_inc_not_zero(&mm->mm_users))
		goto free;

	while (count > 0) {
		int this_len = min_t(int, count, PAGE_SIZE);

		if (write && copy_from_user(page, buf, this_len)) {
			copied = -EFAULT;
			break;
		}

		this_len = access_remote_vm(mm, addr, page, this_len, write);
		if (!this_len) {
			if (!copied)
				copied = -EIO;
			break;
		}

		if (!write && copy_to_user(buf, page, this_len)) {
			copied = -EFAULT;
			break;
		}

		buf += this_len;
		addr += this_len;
		copied += this_len;
		count -= this_len;
	}
	*ppos = addr;

	mmput(mm);
free:
	free_page((unsigned long) page);
	return copied;
}

static ssize_t mem_read(struct file *file, char __user *buf,
			size_t count, loff_t *ppos)
{
	return mem_rw(file, buf, count, ppos, 0);
}

static ssize_t mem_write(struct file *file, const char __user *buf,
			 size_t count, loff_t *ppos)
{
	return mem_rw(file, (char __user*)buf, count, ppos, 1);
}

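/*
 * mem_lseek() accepts only SEEK_SET (0) and SEEK_CUR (1).  The offset
 * is an address in the target process, so "negative" file positions are
 * legitimate; see FMODE_UNSIGNED_OFFSET in mem_open().
 */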
loff_t mem_lseek(struct file *file, loff_t offset, int orig)
{
	switch (orig) {
	case 0:
		file->f_pos = offset;
		break;
	case 1:
		file->f_pos += offset;
		break;
	default:
		return -EINVAL;
	}
	force_successful_syscall_return();
	return file->f_pos;
}

static int mem_release(struct inode *inode, struct file *file)
{
	struct mm_struct *mm = file->private_data;
	if (mm)
		mmdrop(mm);
	return 0;
}

static const struct file_operations proc_mem_operations = {
	.llseek		= mem_lseek,
	.read		= mem_read,
	.write		= mem_write,
	.open		= mem_open,
	.release	= mem_release,
};

static int environ_open(struct inode *inode, struct file *file)
{
	return __mem_open(inode, file, PTRACE_MODE_READ);
}

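/*
 * /proc/<pid>/environ reads straight out of the [env_start, env_end)
 * range of the target mm, yielding the NUL-separated environment
 * strings exactly as they sit in the process's address space.
 */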
static ssize_t environ_read(struct file *file, char __user *buf,
			size_t count, loff_t *ppos)
{
	char *page;
	unsigned long src = *ppos;
	int ret = 0;
	struct mm_struct *mm = file->private_data;

	if (!mm)
		return 0;

	page = (char *)__get_free_page(GFP_TEMPORARY);
	if (!page)
		return -ENOMEM;

	ret = 0;
	if (!atomic_inc_not_zero(&mm->mm_users))
		goto free;
	while (count > 0) {
		size_t this_len, max_len;
		int retval;

		if (src >= (mm->env_end - mm->env_start))
			break;

		this_len = mm->env_end - (mm->env_start + src);

		max_len = min_t(size_t, PAGE_SIZE, count);
		this_len = min(max_len, this_len);

		retval = access_remote_vm(mm, (mm->env_start + src),
			page, this_len, 0);

		if (retval <= 0) {
			ret = retval;
			break;
		}

		if (copy_to_user(buf, page, retval)) {
			ret = -EFAULT;
			break;
		}

		ret += retval;
		src += retval;
		buf += retval;
		count -= retval;
	}
	*ppos = src;
	mmput(mm);

free:
	free_page((unsigned long) page);
	return ret;
}

static const struct file_operations proc_environ_operations = {
	.open		= environ_open,
	.read		= environ_read,
	.llseek		= generic_file_llseek,
	.release	= mem_release,
};

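/*
 * /proc/<pid>/oom_adj is a deprecated view of oom_score_adj: values are
 * rescaled between the legacy [OOM_DISABLE, OOM_ADJUST_MAX] = [-17, 15]
 * range and oom_score_adj's [-1000, 1000].  For example, writing 8 here
 * stores 8 * 1000 / 17 = 470 in oom_score_adj.
 */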
static ssize_t oom_adj_read(struct file *file, char __user *buf, size_t count,
			    loff_t *ppos)
{
	struct task_struct *task = get_proc_task(file_inode(file));
	char buffer[PROC_NUMBUF];
	int oom_adj = OOM_ADJUST_MIN;
	size_t len;
	unsigned long flags;

	if (!task)
		return -ESRCH;
	if (lock_task_sighand(task, &flags)) {
		if (task->signal->oom_score_adj == OOM_SCORE_ADJ_MAX)
			oom_adj = OOM_ADJUST_MAX;
		else
			oom_adj = (task->signal->oom_score_adj * -OOM_DISABLE) /
				  OOM_SCORE_ADJ_MAX;
		unlock_task_sighand(task, &flags);
	}
	put_task_struct(task);
	len = snprintf(buffer, sizeof(buffer), "%d\n", oom_adj);
	return simple_read_from_buffer(buf, count, ppos, buffer, len);
}

static ssize_t oom_adj_write(struct file *file, const char __user *buf,
			     size_t count, loff_t *ppos)
{
	struct task_struct *task;
	char buffer[PROC_NUMBUF];
	int oom_adj;
	unsigned long flags;
	int err;

	memset(buffer, 0, sizeof(buffer));
	if (count > sizeof(buffer) - 1)
		count = sizeof(buffer) - 1;
	if (copy_from_user(buffer, buf, count)) {
		err = -EFAULT;
		goto out;
	}

	err = kstrtoint(strstrip(buffer), 0, &oom_adj);
	if (err)
		goto out;
	if ((oom_adj < OOM_ADJUST_MIN || oom_adj > OOM_ADJUST_MAX) &&
	     oom_adj != OOM_DISABLE) {
		err = -EINVAL;
		goto out;
	}

	task = get_proc_task(file_inode(file));
	if (!task) {
		err = -ESRCH;
		goto out;
	}

	task_lock(task);
	if (!task->mm) {
		err = -EINVAL;
		goto err_task_lock;
	}

	if (!lock_task_sighand(task, &flags)) {
		err = -ESRCH;
		goto err_task_lock;
	}

	/*
	 * Scale /proc/pid/oom_score_adj appropriately ensuring that a maximum
	 * value is always attainable.
	 */
	if (oom_adj == OOM_ADJUST_MAX)
		oom_adj = OOM_SCORE_ADJ_MAX;
	else
		oom_adj = (oom_adj * OOM_SCORE_ADJ_MAX) / -OOM_DISABLE;

	if (oom_adj < task->signal->oom_score_adj &&
	    !capable(CAP_SYS_RESOURCE)) {
		err = -EACCES;
		goto err_sighand;
	}

	/*
	 * /proc/pid/oom_adj is provided for legacy purposes, ask users to use
	 * /proc/pid/oom_score_adj instead.
	 */
	pr_warn_once("%s (%d): /proc/%d/oom_adj is deprecated, please use /proc/%d/oom_score_adj instead.\n",
		     current->comm, task_pid_nr(current), task_pid_nr(task),
		     task_pid_nr(task));

	task->signal->oom_score_adj = oom_adj;
	trace_oom_score_adj_update(task);
err_sighand:
	unlock_task_sighand(task, &flags);
err_task_lock:
	task_unlock(task);
	put_task_struct(task);
out:
	return err < 0 ? err : count;
}

static const struct file_operations proc_oom_adj_operations = {
	.read		= oom_adj_read,
	.write		= oom_adj_write,
	.llseek		= generic_file_llseek,
};

|
|
|
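
/*
 * Show the task's current oom_score_adj as a signed decimal string.
 * The value lives in the signal struct, so it is read under siglock;
 * if the sighand is already gone, OOM_SCORE_ADJ_MIN is reported.
 */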
static ssize_t oom_score_adj_read(struct file *file, char __user *buf,
				  size_t count, loff_t *ppos)
{
	struct task_struct *task = get_proc_task(file_inode(file));
	char buffer[PROC_NUMBUF];
	short oom_score_adj = OOM_SCORE_ADJ_MIN;
	unsigned long flags;
	size_t len;

	if (!task)
		return -ESRCH;
	if (lock_task_sighand(task, &flags)) {
		oom_score_adj = task->signal->oom_score_adj;
		unlock_task_sighand(task, &flags);
	}
	put_task_struct(task);
	len = snprintf(buffer, sizeof(buffer), "%hd\n", oom_score_adj);
	return simple_read_from_buffer(buf, count, ppos, buffer, len);
}
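
/*
 * Accept a new oom_score_adj in [OOM_SCORE_ADJ_MIN, OOM_SCORE_ADJ_MAX].
 * An unprivileged writer may not drop the value below the task's
 * recorded floor (->oom_score_adj_min); a writer with CAP_SYS_RESOURCE
 * may, and also resets that floor to the new value.
 */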
static ssize_t oom_score_adj_write(struct file *file, const char __user *buf,
				   size_t count, loff_t *ppos)
{
	struct task_struct *task;
	char buffer[PROC_NUMBUF];
	unsigned long flags;
	int oom_score_adj;
	int err;

	memset(buffer, 0, sizeof(buffer));
	if (count > sizeof(buffer) - 1)
		count = sizeof(buffer) - 1;
	if (copy_from_user(buffer, buf, count)) {
		err = -EFAULT;
		goto out;
	}

	err = kstrtoint(strstrip(buffer), 0, &oom_score_adj);
	if (err)
		goto out;
	if (oom_score_adj < OOM_SCORE_ADJ_MIN ||
			oom_score_adj > OOM_SCORE_ADJ_MAX) {
		err = -EINVAL;
		goto out;
	}

	task = get_proc_task(file_inode(file));
	if (!task) {
		err = -ESRCH;
		goto out;
	}

	task_lock(task);
	if (!task->mm) {
		err = -EINVAL;
		goto err_task_lock;
	}

	if (!lock_task_sighand(task, &flags)) {
		err = -ESRCH;
		goto err_task_lock;
	}

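	/*
	 * Lowering oom_score_adj below the task's recorded minimum is a
	 * privileged operation: it makes the task harder to oom kill.
	 */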
	if ((short)oom_score_adj < task->signal->oom_score_adj_min &&
			!capable(CAP_SYS_RESOURCE)) {
		err = -EACCES;
		goto err_sighand;
	}

	task->signal->oom_score_adj = (short)oom_score_adj;
	if (has_capability_noaudit(current, CAP_SYS_RESOURCE))
		task->signal->oom_score_adj_min = (short)oom_score_adj;

	trace_oom_score_adj_update(task);

err_sighand:
	unlock_task_sighand(task, &flags);
err_task_lock:
	task_unlock(task);
	put_task_struct(task);
out:
	return err < 0 ? err : count;
|
oom: badness heuristic rewrite
This a complete rewrite of the oom killer's badness() heuristic which is
used to determine which task to kill in oom conditions. The goal is to
make it as simple and predictable as possible so the results are better
understood and we end up killing the task which will lead to the most
memory freeing while still respecting the fine-tuning from userspace.
Instead of basing the heuristic on mm->total_vm for each task, the task's
rss and swap space is used instead. This is a better indication of the
amount of memory that will be freeable if the oom killed task is chosen
and subsequently exits. This helps specifically in cases where KDE or
GNOME is chosen for oom kill on desktop systems instead of a memory
hogging task.
The baseline for the heuristic is a proportion of memory that each task is
currently using in memory plus swap compared to the amount of "allowable"
memory. "Allowable," in this sense, means the system-wide resources for
unconstrained oom conditions, the set of mempolicy nodes, the mems
attached to current's cpuset, or a memory controller's limit. The
proportion is given on a scale of 0 (never kill) to 1000 (always kill),
roughly meaning that if a task has a badness() score of 500 that the task
consumes approximately 50% of allowable memory resident in RAM or in swap
space.
The proportion is always relative to the amount of "allowable" memory and
not the total amount of RAM systemwide so that mempolicies and cpusets may
operate in isolation; they shall not need to know the true size of the
machine on which they are running if they are bound to a specific set of
nodes or mems, respectively.
Root tasks are given 3% extra memory just like __vm_enough_memory()
provides in LSMs. In the event of two tasks consuming similar amounts of
memory, it is generally better to save root's task.
Because of the change in the badness() heuristic's baseline, it is also
necessary to introduce a new user interface to tune it. It's not possible
to redefine the meaning of /proc/pid/oom_adj with a new scale since the
ABI cannot be changed for backward compatability. Instead, a new tunable,
/proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may
be used to polarize the heuristic such that certain tasks are never
considered for oom kill while others may always be considered. The value
is added directly into the badness() score so a value of -500, for
example, means to discount 50% of its memory consumption in comparison to
other tasks either on the system, bound to the mempolicy, in the cpuset,
or sharing the same memory controller.
/proc/pid/oom_adj is changed so that its meaning is rescaled into the
units used by /proc/pid/oom_score_adj, and vice versa. Changing one of
these per-task tunables will rescale the value of the other to an
equivalent meaning. Although /proc/pid/oom_adj was originally defined as
a bitshift on the badness score, it now shares the same linear growth as
/proc/pid/oom_score_adj but with different granularity. This is required
so the ABI is not broken with userspace applications and allows oom_adj to
be deprecated for future removal.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-08-10 04:19:46 +04:00
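A minimal userspace model of the proportional scoring described above; badness_points() and its simplified page-count inputs are hypothetical illustrations, not the kernel's actual oom_badness() code:

#include <stdio.h>

/* Hypothetical model: score = (rss + swap) * 1000 / allowable,
 * discounted by ~3% of allowable for root, shifted by oom_score_adj,
 * then clamped to the 0 (never kill) .. 1000 (always kill) range. */
static long badness_points(long rss_pages, long swap_pages,
			   long allowable_pages, int is_root,
			   int oom_score_adj)
{
	long points = (rss_pages + swap_pages) * 1000 / allowable_pages;

	if (is_root)
		points -= 30;		/* root's 3% bonus */
	points += oom_score_adj;	/* -1000 .. +1000 polarizes the score */

	if (points < 0)
		points = 0;
	if (points > 1000)
		points = 1000;
	return points;
}

int main(void)
{
	/* a task using half of allowable memory, discounted by -500 */
	printf("%ld\n", badness_points(400, 100, 1000, 0, -500));
	return 0;
}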
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations proc_oom_score_adj_operations = {
|
|
|
|
.read = oom_score_adj_read,
|
|
|
|
.write = oom_score_adj_write,
|
llseek: automatically add .llseek fop
All file_operations should get a .llseek operation so we can make
nonseekable_open the default for future file operations without a
.llseek pointer.
The three cases that we can automatically detect are no_llseek, seq_lseek
and default_llseek. For cases where we can automatically prove that
the file offset is always ignored, we use noop_llseek, which maintains
the current behavior of not returning an error from a seek.
New drivers should normally not use noop_llseek but instead use no_llseek
and call nonseekable_open at open time. Existing drivers can be converted
to do the same when the maintainer knows for certain that no user code
relies on calling seek on the device file.
The generated code is often incorrectly indented and right now contains
comments that clarify for each added line why a specific variant was
chosen. In the version that gets submitted upstream, the comments will
be gone and I will manually fix the indentation, because there does not
seem to be a way to do that using coccinelle.
Some amount of new code is currently sitting in linux-next that should get
the same modifications, which I will do at the end of the merge window.
Many thanks to Julia Lawall for helping me learn to write a semantic
patch that does all this.
===== begin semantic patch =====
// This adds an llseek= method to all file operations,
// as a preparation for making no_llseek the default.
//
// The rules are
// - use no_llseek explicitly if we do nonseekable_open
// - use seq_lseek for sequential files
// - use default_llseek if we know we access f_pos
// - use noop_llseek if we know we don't access f_pos,
// but we still want to allow users to call lseek
//
@ open1 exists @
identifier nested_open;
@@
nested_open(...)
{
<+...
nonseekable_open(...)
...+>
}
@ open exists@
identifier open_f;
identifier i, f;
identifier open1.nested_open;
@@
int open_f(struct inode *i, struct file *f)
{
<+...
(
nonseekable_open(...)
|
nested_open(...)
)
...+>
}
@ read disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ read_no_fpos disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
... when != off
}
@ write @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ write_no_fpos @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
... when != off
}
@ fops0 @
identifier fops;
@@
struct file_operations fops = {
...
};
@ has_llseek depends on fops0 @
identifier fops0.fops;
identifier llseek_f;
@@
struct file_operations fops = {
...
.llseek = llseek_f,
...
};
@ has_read depends on fops0 @
identifier fops0.fops;
identifier read_f;
@@
struct file_operations fops = {
...
.read = read_f,
...
};
@ has_write depends on fops0 @
identifier fops0.fops;
identifier write_f;
@@
struct file_operations fops = {
...
.write = write_f,
...
};
@ has_open depends on fops0 @
identifier fops0.fops;
identifier open_f;
@@
struct file_operations fops = {
...
.open = open_f,
...
};
// use no_llseek if we call nonseekable_open
////////////////////////////////////////////
@ nonseekable1 depends on !has_llseek && has_open @
identifier fops0.fops;
identifier nso ~= "nonseekable_open";
@@
struct file_operations fops = {
... .open = nso, ...
+.llseek = no_llseek, /* nonseekable */
};
@ nonseekable2 depends on !has_llseek @
identifier fops0.fops;
identifier open.open_f;
@@
struct file_operations fops = {
... .open = open_f, ...
+.llseek = no_llseek, /* open uses nonseekable */
};
// use seq_lseek for sequential files
/////////////////////////////////////
@ seq depends on !has_llseek @
identifier fops0.fops;
identifier sr ~= "seq_read";
@@
struct file_operations fops = {
... .read = sr, ...
+.llseek = seq_lseek, /* we have seq_read */
};
// use default_llseek if there is a readdir
///////////////////////////////////////////
@ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier readdir_e;
@@
// any other fop is used that changes pos
struct file_operations fops = {
... .readdir = readdir_e, ...
+.llseek = default_llseek, /* readdir is present */
};
// use default_llseek if at least one of read/write touches f_pos
/////////////////////////////////////////////////////////////////
@ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read.read_f;
@@
// read fops use offset
struct file_operations fops = {
... .read = read_f, ...
+.llseek = default_llseek, /* read accesses f_pos */
};
@ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write.write_f;
@@
// write fops use offset
struct file_operations fops = {
... .write = write_f, ...
+ .llseek = default_llseek, /* write accesses f_pos */
};
// Use noop_llseek if neither read nor write accesses f_pos
///////////////////////////////////////////////////////////
@ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
identifier write_no_fpos.write_f;
@@
// neither read nor write accesses f_pos
struct file_operations fops = {
...
.write = write_f,
.read = read_f,
...
+.llseek = noop_llseek, /* read and write both use no f_pos */
};
@ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write_no_fpos.write_f;
@@
struct file_operations fops = {
... .write = write_f, ...
+.llseek = noop_llseek, /* write uses no f_pos */
};
@ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
@@
struct file_operations fops = {
... .read = read_f, ...
+.llseek = noop_llseek, /* read uses no f_pos */
};
@ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
@@
struct file_operations fops = {
...
+.llseek = noop_llseek, /* no read or write fn */
};
===== End semantic patch =====
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Julia Lawall <julia@diku.dk>
Cc: Christoph Hellwig <hch@infradead.org>
2010-08-15 20:52:59 +04:00
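As an illustration only (example_fops, example_read and example_write are hypothetical, and this is not a hunk from the patch), a fops whose read handler advances *ppos would be rewritten by the fops2 rule above like so:

static const struct file_operations example_fops = {
	.read	= example_read,		/* assumed to use *ppos */
	.write	= example_write,
	.llseek	= default_llseek,	/* added: read accesses f_pos */
};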
|
|
|
.llseek = default_llseek,
|
2010-08-10 04:19:46 +04:00
|
|
|
};
|
|
|
|
|
2005-04-17 02:20:36 +04:00
|
|
|
#ifdef CONFIG_AUDITSYSCALL
|
|
|
|
#define TMPBUFLEN 21
|
|
|
|
static ssize_t proc_loginuid_read(struct file * file, char __user * buf,
|
|
|
|
size_t count, loff_t *ppos)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct inode * inode = file_inode(file);
|
2006-06-26 11:25:55 +04:00
|
|
|
struct task_struct *task = get_proc_task(inode);
|
2005-04-17 02:20:36 +04:00
|
|
|
ssize_t length;
|
|
|
|
char tmpbuf[TMPBUFLEN];
|
|
|
|
|
2006-06-26 11:25:55 +04:00
|
|
|
if (!task)
|
|
|
|
return -ESRCH;
|
2005-04-17 02:20:36 +04:00
|
|
|
length = scnprintf(tmpbuf, TMPBUFLEN, "%u",
|
2012-09-11 09:39:43 +04:00
|
|
|
from_kuid(file->f_cred->user_ns,
|
|
|
|
audit_get_loginuid(task)));
|
2006-06-26 11:25:55 +04:00
|
|
|
put_task_struct(task);
|
2005-04-17 02:20:36 +04:00
|
|
|
return simple_read_from_buffer(buf, count, ppos, tmpbuf, length);
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t proc_loginuid_write(struct file * file, const char __user * buf,
|
|
|
|
size_t count, loff_t *ppos)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct inode * inode = file_inode(file);
|
2005-04-17 02:20:36 +04:00
|
|
|
char *page, *tmp;
|
|
|
|
ssize_t length;
|
|
|
|
uid_t loginuid;
|
2012-09-11 09:39:43 +04:00
|
|
|
kuid_t kloginuid;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2010-02-23 04:04:52 +03:00
|
|
|
rcu_read_lock();
|
|
|
|
if (current != pid_task(proc_pid(inode), PIDTYPE_PID)) {
|
|
|
|
rcu_read_unlock();
|
2005-04-17 02:20:36 +04:00
|
|
|
return -EPERM;
|
2010-02-23 04:04:52 +03:00
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-05-18 16:28:02 +04:00
|
|
|
if (count >= PAGE_SIZE)
|
|
|
|
count = PAGE_SIZE - 1;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
|
|
|
if (*ppos != 0) {
|
|
|
|
/* No partial writes. */
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
2007-10-16 12:25:52 +04:00
|
|
|
page = (char*)__get_free_page(GFP_TEMPORARY);
|
2005-04-17 02:20:36 +04:00
|
|
|
if (!page)
|
|
|
|
return -ENOMEM;
|
|
|
|
length = -EFAULT;
|
|
|
|
if (copy_from_user(page, buf, count))
|
|
|
|
goto out_free_page;
|
|
|
|
|
2006-05-18 16:28:02 +04:00
|
|
|
page[count] = '\0';
|
2005-04-17 02:20:36 +04:00
|
|
|
loginuid = simple_strtoul(page, &tmp, 10);
|
|
|
|
if (tmp == page) {
|
|
|
|
length = -EINVAL;
|
|
|
|
goto out_free_page;
|
|
|
|
|
|
|
|
}
|
2013-05-24 17:49:14 +04:00
|
|
|
|
|
|
|
/* is userspace trying to explicitly UNSET the loginuid? */
|
|
|
|
if (loginuid == AUDIT_UID_UNSET) {
|
|
|
|
kloginuid = INVALID_UID;
|
|
|
|
} else {
|
|
|
|
kloginuid = make_kuid(file->f_cred->user_ns, loginuid);
|
|
|
|
if (!uid_valid(kloginuid)) {
|
|
|
|
length = -EINVAL;
|
|
|
|
goto out_free_page;
|
|
|
|
}
|
2012-09-11 09:39:43 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
length = audit_set_loginuid(kloginuid);
|
2005-04-17 02:20:36 +04:00
|
|
|
if (likely(length == 0))
|
|
|
|
length = count;
|
|
|
|
|
|
|
|
out_free_page:
|
|
|
|
free_page((unsigned long) page);
|
|
|
|
return length;
|
|
|
|
}
|
|
|
|
|
2007-02-12 11:55:34 +03:00
|
|
|
static const struct file_operations proc_loginuid_operations = {
|
2005-04-17 02:20:36 +04:00
|
|
|
.read = proc_loginuid_read,
|
|
|
|
.write = proc_loginuid_write,
|
2010-03-18 01:06:02 +03:00
|
|
|
.llseek = generic_file_llseek,
|
2005-04-17 02:20:36 +04:00
|
|
|
};
|
2008-03-13 15:15:31 +03:00
|
|
|
|
|
|
|
static ssize_t proc_sessionid_read(struct file * file, char __user * buf,
|
|
|
|
size_t count, loff_t *ppos)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct inode * inode = file_inode(file);
|
2008-03-13 15:15:31 +03:00
|
|
|
struct task_struct *task = get_proc_task(inode);
|
|
|
|
ssize_t length;
|
|
|
|
char tmpbuf[TMPBUFLEN];
|
|
|
|
|
|
|
|
if (!task)
|
|
|
|
return -ESRCH;
|
|
|
|
length = scnprintf(tmpbuf, TMPBUFLEN, "%u",
|
|
|
|
audit_get_sessionid(task));
|
|
|
|
put_task_struct(task);
|
|
|
|
return simple_read_from_buffer(buf, count, ppos, tmpbuf, length);
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations proc_sessionid_operations = {
|
|
|
|
.read = proc_sessionid_read,
|
2010-03-18 01:06:02 +03:00
|
|
|
.llseek = generic_file_llseek,
|
2008-03-13 15:15:31 +03:00
|
|
|
};
|
2005-04-17 02:20:36 +04:00
|
|
|
#endif
|
|
|
|
|
2006-12-08 13:39:47 +03:00
|
|
|
#ifdef CONFIG_FAULT_INJECTION
|
|
|
|
static ssize_t proc_fault_inject_read(struct file * file, char __user * buf,
|
|
|
|
size_t count, loff_t *ppos)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct task_struct *task = get_proc_task(file_inode(file));
|
2006-12-08 13:39:47 +03:00
|
|
|
char buffer[PROC_NUMBUF];
|
|
|
|
size_t len;
|
|
|
|
int make_it_fail;
|
|
|
|
|
|
|
|
if (!task)
|
|
|
|
return -ESRCH;
|
|
|
|
make_it_fail = task->make_it_fail;
|
|
|
|
put_task_struct(task);
|
|
|
|
|
|
|
|
len = snprintf(buffer, sizeof(buffer), "%i\n", make_it_fail);
|
2007-05-08 11:31:41 +04:00
|
|
|
|
|
|
|
return simple_read_from_buffer(buf, count, ppos, buffer, len);
|
2006-12-08 13:39:47 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t proc_fault_inject_write(struct file * file,
|
|
|
|
const char __user * buf, size_t count, loff_t *ppos)
|
|
|
|
{
|
|
|
|
struct task_struct *task;
|
|
|
|
char buffer[PROC_NUMBUF], *end;
|
|
|
|
int make_it_fail;
|
|
|
|
|
|
|
|
if (!capable(CAP_SYS_RESOURCE))
|
|
|
|
return -EPERM;
|
|
|
|
memset(buffer, 0, sizeof(buffer));
|
|
|
|
if (count > sizeof(buffer) - 1)
|
|
|
|
count = sizeof(buffer) - 1;
|
|
|
|
if (copy_from_user(buffer, buf, count))
|
|
|
|
return -EFAULT;
|
2009-09-23 03:45:38 +04:00
|
|
|
make_it_fail = simple_strtol(strstrip(buffer), &end, 0);
|
|
|
|
if (*end)
|
|
|
|
return -EINVAL;
|
2014-04-08 02:39:15 +04:00
|
|
|
if (make_it_fail < 0 || make_it_fail > 1)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2013-01-24 02:07:38 +04:00
|
|
|
task = get_proc_task(file_inode(file));
|
2006-12-08 13:39:47 +03:00
|
|
|
if (!task)
|
|
|
|
return -ESRCH;
|
|
|
|
task->make_it_fail = make_it_fail;
|
|
|
|
put_task_struct(task);
|
2009-09-23 03:45:38 +04:00
|
|
|
|
|
|
|
return count;
|
2006-12-08 13:39:47 +03:00
|
|
|
}
|
|
|
|
|
2007-02-12 11:55:34 +03:00
|
|
|
static const struct file_operations proc_fault_inject_operations = {
|
2006-12-08 13:39:47 +03:00
|
|
|
.read = proc_fault_inject_read,
|
|
|
|
.write = proc_fault_inject_write,
|
2010-03-18 01:06:02 +03:00
|
|
|
.llseek = generic_file_llseek,
|
2006-12-08 13:39:47 +03:00
|
|
|
};
|
|
|
|
#endif
|
|
|
|
|
2008-01-25 23:08:34 +03:00
|
|
|
|
2007-07-09 20:52:00 +04:00
|
|
|
#ifdef CONFIG_SCHED_DEBUG
|
|
|
|
/*
|
|
|
|
* Print out various scheduling related per-task fields:
|
|
|
|
*/
|
|
|
|
static int sched_show(struct seq_file *m, void *v)
|
|
|
|
{
|
|
|
|
struct inode *inode = m->private;
|
|
|
|
struct task_struct *p;
|
|
|
|
|
|
|
|
p = get_proc_task(inode);
|
|
|
|
if (!p)
|
|
|
|
return -ESRCH;
|
|
|
|
proc_sched_show_task(p, m);
|
|
|
|
|
|
|
|
put_task_struct(p);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
|
|
|
sched_write(struct file *file, const char __user *buf,
|
|
|
|
size_t count, loff_t *offset)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct inode *inode = file_inode(file);
|
2007-07-09 20:52:00 +04:00
|
|
|
struct task_struct *p;
|
|
|
|
|
|
|
|
p = get_proc_task(inode);
|
|
|
|
if (!p)
|
|
|
|
return -ESRCH;
|
|
|
|
proc_sched_set_task(p);
|
|
|
|
|
|
|
|
put_task_struct(p);
|
|
|
|
|
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int sched_open(struct inode *inode, struct file *filp)
|
|
|
|
{
|
2011-01-13 04:00:34 +03:00
|
|
|
return single_open(filp, sched_show, inode);
|
2007-07-09 20:52:00 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations proc_pid_sched_operations = {
|
|
|
|
.open = sched_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.write = sched_write,
|
|
|
|
.llseek = seq_lseek,
|
2007-07-31 11:38:50 +04:00
|
|
|
.release = single_release,
|
2007-07-09 20:52:00 +04:00
|
|
|
};
|
|
|
|
|
|
|
|
#endif
|
|
|
|
|
sched: Add 'autogroup' scheduling feature: automated per session task groups
A recurring complaint from CFS users is that parallel kbuild has
a negative impact on desktop interactivity. This patch
implements an idea from Linus, to automatically create task
groups. Currently, only per session autogroups are implemented,
but the patch leaves the way open for enhancement.
Implementation: each task's signal struct contains an inherited
pointer to a refcounted autogroup struct containing a task group
pointer, the default for all tasks pointing to the
init_task_group. When a task calls setsid(), a new task group
is created, the process is moved into the new task group, and a
reference to the previous task group is dropped. Child
processes inherit this task group thereafter, and increase its
refcount. When the last thread of a process exits, the
process's reference is dropped, such that when the last process
referencing an autogroup exits, the autogroup is destroyed.
At runqueue selection time, IFF a task has no cgroup assignment,
its current autogroup is used.
Autogroup bandwidth is controllable via setting its nice level
through the proc filesystem:
cat /proc/<pid>/autogroup
Displays the task's group and the group's nice level.
echo <nice level> > /proc/<pid>/autogroup
Sets the task group's shares to the weight of a nice <level> task.
Setting nice level is rate limited for !admin users due to the
abuse risk of task group locking.
The feature is enabled from boot by default if
CONFIG_SCHED_AUTOGROUP=y is selected, but can be disabled via
the boot option noautogroup, and can also be turned on/off on
the fly via:
echo [01] > /proc/sys/kernel/sched_autogroup_enabled
... which will automatically move tasks to/from the root task group.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Paul Turner <pjt@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
[ Removed the task_group_path() debug code, and fixed !EVENTFD build failure. ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
LKML-Reference: <1290281700.28711.9.camel@maggy.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-11-30 16:18:03 +03:00
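A small userspace sketch of the interface just described, assuming a kernel built with CONFIG_SCHED_AUTOGROUP; it simply reads and rewrites the current task's /proc/<pid>/autogroup file:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char path[64], line[128];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/autogroup", getpid());

	/* display the task's group and the group's nice level */
	f = fopen(path, "r");
	if (f) {
		if (fgets(line, sizeof(line), f))
			fputs(line, stdout);
		fclose(f);
	}

	/* set the group's shares to the weight of a nice 10 task
	 * (may be rate limited for !admin users) */
	f = fopen(path, "w");
	if (f) {
		fprintf(f, "10\n");
		fclose(f);
	}
	return 0;
}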
|
|
|
#ifdef CONFIG_SCHED_AUTOGROUP
|
|
|
|
/*
|
|
|
|
* Print out autogroup related information:
|
|
|
|
*/
|
|
|
|
static int sched_autogroup_show(struct seq_file *m, void *v)
|
|
|
|
{
|
|
|
|
struct inode *inode = m->private;
|
|
|
|
struct task_struct *p;
|
|
|
|
|
|
|
|
p = get_proc_task(inode);
|
|
|
|
if (!p)
|
|
|
|
return -ESRCH;
|
|
|
|
proc_sched_autogroup_show_task(p, m);
|
|
|
|
|
|
|
|
put_task_struct(p);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t
|
|
|
|
sched_autogroup_write(struct file *file, const char __user *buf,
|
|
|
|
size_t count, loff_t *offset)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct inode *inode = file_inode(file);
|
2010-11-30 16:18:03 +03:00
|
|
|
struct task_struct *p;
|
|
|
|
char buffer[PROC_NUMBUF];
|
2011-05-27 03:25:50 +04:00
|
|
|
int nice;
|
2010-11-30 16:18:03 +03:00
|
|
|
int err;
|
|
|
|
|
|
|
|
memset(buffer, 0, sizeof(buffer));
|
|
|
|
if (count > sizeof(buffer) - 1)
|
|
|
|
count = sizeof(buffer) - 1;
|
|
|
|
if (copy_from_user(buffer, buf, count))
|
|
|
|
return -EFAULT;
|
|
|
|
|
2011-05-27 03:25:50 +04:00
|
|
|
err = kstrtoint(strstrip(buffer), 0, &nice);
|
|
|
|
if (err < 0)
|
|
|
|
return err;
|
2010-11-30 16:18:03 +03:00
|
|
|
|
|
|
|
p = get_proc_task(inode);
|
|
|
|
if (!p)
|
|
|
|
return -ESRCH;
|
|
|
|
|
2012-02-23 12:41:27 +04:00
|
|
|
err = proc_sched_autogroup_set_nice(p, nice);
|
2010-11-30 16:18:03 +03:00
|
|
|
if (err)
|
|
|
|
count = err;
|
|
|
|
|
|
|
|
put_task_struct(p);
|
|
|
|
|
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int sched_autogroup_open(struct inode *inode, struct file *filp)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = single_open(filp, sched_autogroup_show, NULL);
|
|
|
|
if (!ret) {
|
|
|
|
struct seq_file *m = filp->private_data;
|
|
|
|
|
|
|
|
m->private = inode;
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations proc_pid_sched_autogroup_operations = {
|
|
|
|
.open = sched_autogroup_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.write = sched_autogroup_write,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = single_release,
|
|
|
|
};
|
|
|
|
|
|
|
|
#endif /* CONFIG_SCHED_AUTOGROUP */
|
|
|
|
|
2009-12-15 05:00:05 +03:00
|
|
|
static ssize_t comm_write(struct file *file, const char __user *buf,
|
|
|
|
size_t count, loff_t *offset)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct inode *inode = file_inode(file);
|
2009-12-15 05:00:05 +03:00
|
|
|
struct task_struct *p;
|
|
|
|
char buffer[TASK_COMM_LEN];
|
2013-05-01 02:28:18 +04:00
|
|
|
const size_t maxlen = sizeof(buffer) - 1;
|
2009-12-15 05:00:05 +03:00
|
|
|
|
|
|
|
memset(buffer, 0, sizeof(buffer));
|
2013-05-01 02:28:18 +04:00
|
|
|
if (copy_from_user(buffer, buf, count > maxlen ? maxlen : count))
|
2009-12-15 05:00:05 +03:00
|
|
|
return -EFAULT;
|
|
|
|
|
|
|
|
p = get_proc_task(inode);
|
|
|
|
if (!p)
|
|
|
|
return -ESRCH;
|
|
|
|
|
|
|
|
if (same_thread_group(current, p))
|
|
|
|
set_task_comm(p, buffer);
|
|
|
|
else
|
|
|
|
count = -EINVAL;
|
|
|
|
|
|
|
|
put_task_struct(p);
|
|
|
|
|
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int comm_show(struct seq_file *m, void *v)
|
|
|
|
{
|
|
|
|
struct inode *inode = m->private;
|
|
|
|
struct task_struct *p;
|
|
|
|
|
|
|
|
p = get_proc_task(inode);
|
|
|
|
if (!p)
|
|
|
|
return -ESRCH;
|
|
|
|
|
|
|
|
task_lock(p);
|
|
|
|
seq_printf(m, "%s\n", p->comm);
|
|
|
|
task_unlock(p);
|
|
|
|
|
|
|
|
put_task_struct(p);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int comm_open(struct inode *inode, struct file *filp)
|
|
|
|
{
|
2011-01-13 04:00:34 +03:00
|
|
|
return single_open(filp, comm_show, inode);
|
2009-12-15 05:00:05 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations proc_pid_set_comm_operations = {
|
|
|
|
.open = comm_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.write = comm_write,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = single_release,
|
|
|
|
};
|
|
|
|
|
2012-01-11 03:11:20 +04:00
|
|
|
static int proc_exe_link(struct dentry *dentry, struct path *exe_path)
|
2008-04-29 12:01:36 +04:00
|
|
|
{
|
|
|
|
struct task_struct *task;
|
|
|
|
struct mm_struct *mm;
|
|
|
|
struct file *exe_file;
|
|
|
|
|
2012-01-11 03:11:20 +04:00
|
|
|
task = get_proc_task(dentry->d_inode);
|
2008-04-29 12:01:36 +04:00
|
|
|
if (!task)
|
|
|
|
return -ENOENT;
|
|
|
|
mm = get_task_mm(task);
|
|
|
|
put_task_struct(task);
|
|
|
|
if (!mm)
|
|
|
|
return -ENOENT;
|
|
|
|
exe_file = get_mm_exe_file(mm);
|
|
|
|
mmput(mm);
|
|
|
|
if (exe_file) {
|
|
|
|
*exe_path = exe_file->f_path;
|
|
|
|
path_get(&exe_file->f_path);
|
|
|
|
fput(exe_file);
|
|
|
|
return 0;
|
|
|
|
} else
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
|
|
|
|
[PATCH] Fix up symlink function pointers
This fixes up the symlink functions for the calling convention change:
* afs, autofs4, befs, devfs, freevxfs, jffs2, jfs, ncpfs, procfs,
smbfs, sysvfs, ufs, xfs - prototype change for ->follow_link()
* befs, smbfs, xfs - same for ->put_link()
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-08-20 03:17:39 +04:00
|
|
|
static void *proc_pid_follow_link(struct dentry *dentry, struct nameidata *nd)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
|
|
|
struct inode *inode = dentry->d_inode;
|
2012-06-18 18:47:03 +04:00
|
|
|
struct path path;
|
2005-04-17 02:20:36 +04:00
|
|
|
int error = -EACCES;
|
|
|
|
|
2006-06-26 11:25:58 +04:00
|
|
|
/* Are we allowed to snoop on the task's file descriptors? */
|
|
|
|
if (!proc_fd_access_allowed(inode))
|
2005-04-17 02:20:36 +04:00
|
|
|
goto out;
|
|
|
|
|
2012-06-18 18:47:03 +04:00
|
|
|
error = PROC_I(inode)->op.proc_get_link(dentry, &path);
|
|
|
|
if (error)
|
|
|
|
goto out;
|
|
|
|
|
2012-06-18 18:47:04 +04:00
|
|
|
nd_jump_link(nd, &path);
|
2012-06-18 18:47:03 +04:00
|
|
|
return NULL;
|
2005-04-17 02:20:36 +04:00
|
|
|
out:
|
2005-08-20 03:17:39 +04:00
|
|
|
return ERR_PTR(error);
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
2008-02-15 06:38:35 +03:00
|
|
|
static int do_proc_readlink(struct path *path, char __user *buffer, int buflen)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
2007-10-16 12:25:52 +04:00
|
|
|
char *tmp = (char*)__get_free_page(GFP_TEMPORARY);
|
2008-02-15 06:38:35 +03:00
|
|
|
char *pathname;
|
2005-04-17 02:20:36 +04:00
|
|
|
int len;
|
|
|
|
|
|
|
|
if (!tmp)
|
|
|
|
return -ENOMEM;
|
2007-05-08 11:31:41 +04:00
|
|
|
|
2010-12-06 02:51:21 +03:00
|
|
|
pathname = d_path(path, tmp, PAGE_SIZE);
|
2008-02-15 06:38:35 +03:00
|
|
|
len = PTR_ERR(pathname);
|
|
|
|
if (IS_ERR(pathname))
|
2005-04-17 02:20:36 +04:00
|
|
|
goto out;
|
2008-02-15 06:38:35 +03:00
|
|
|
len = tmp + PAGE_SIZE - 1 - pathname;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
|
|
|
if (len > buflen)
|
|
|
|
len = buflen;
|
2008-02-15 06:38:35 +03:00
|
|
|
if (copy_to_user(buffer, pathname, len))
|
2005-04-17 02:20:36 +04:00
|
|
|
len = -EFAULT;
|
|
|
|
out:
|
|
|
|
free_page((unsigned long)tmp);
|
|
|
|
return len;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int proc_pid_readlink(struct dentry * dentry, char __user * buffer, int buflen)
|
|
|
|
{
|
|
|
|
int error = -EACCES;
|
|
|
|
struct inode *inode = dentry->d_inode;
|
2008-02-15 06:38:35 +03:00
|
|
|
struct path path;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-06-26 11:25:58 +04:00
|
|
|
/* Are we allowed to snoop on the task's file descriptors? */
|
|
|
|
if (!proc_fd_access_allowed(inode))
|
2005-04-17 02:20:36 +04:00
|
|
|
goto out;
|
|
|
|
|
2012-01-11 03:11:20 +04:00
|
|
|
error = PROC_I(inode)->op.proc_get_link(dentry, &path);
|
2005-04-17 02:20:36 +04:00
|
|
|
if (error)
|
|
|
|
goto out;
|
|
|
|
|
2008-02-15 06:38:35 +03:00
|
|
|
error = do_proc_readlink(&path, buffer, buflen);
|
|
|
|
path_put(&path);
|
2005-04-17 02:20:36 +04:00
|
|
|
out:
|
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
2012-08-23 14:43:24 +04:00
|
|
|
const struct inode_operations proc_pid_link_inode_operations = {
|
2005-04-17 02:20:36 +04:00
|
|
|
.readlink = proc_pid_readlink,
|
2006-07-15 23:26:45 +04:00
|
|
|
.follow_link = proc_pid_follow_link,
|
|
|
|
.setattr = proc_setattr,
|
2005-04-17 02:20:36 +04:00
|
|
|
};
|
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
|
|
|
|
/* building an inode */
|
|
|
|
|
2010-03-08 03:41:34 +03:00
|
|
|
struct inode *proc_pid_make_inode(struct super_block * sb, struct task_struct *task)
|
2006-10-02 13:17:05 +04:00
|
|
|
{
|
|
|
|
struct inode * inode;
|
|
|
|
struct proc_inode *ei;
|
2008-11-14 02:39:19 +03:00
|
|
|
const struct cred *cred;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
/* We need a new inode */
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
inode = new_inode(sb);
|
|
|
|
if (!inode)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
/* Common stuff */
|
|
|
|
ei = PROC_I(inode);
|
2010-10-23 19:19:54 +04:00
|
|
|
inode->i_ino = get_next_ino();
|
2006-10-02 13:17:05 +04:00
|
|
|
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
|
|
|
|
inode->i_op = &proc_def_inode_operations;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* grab the reference to task.
|
|
|
|
*/
|
2006-10-02 13:18:59 +04:00
|
|
|
ei->pid = get_task_pid(task, PIDTYPE_PID);
|
2006-10-02 13:17:05 +04:00
|
|
|
if (!ei->pid)
|
|
|
|
goto out_unlock;
|
|
|
|
|
|
|
|
if (task_dumpable(task)) {
|
2008-11-14 02:39:19 +03:00
|
|
|
rcu_read_lock();
|
|
|
|
cred = __task_cred(task);
|
|
|
|
inode->i_uid = cred->euid;
|
|
|
|
inode->i_gid = cred->egid;
|
|
|
|
rcu_read_unlock();
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
2006-10-02 13:17:05 +04:00
|
|
|
security_task_to_inode(task, inode);
|
|
|
|
|
2005-04-17 02:20:36 +04:00
|
|
|
out:
|
2006-10-02 13:17:05 +04:00
|
|
|
return inode;
|
|
|
|
|
|
|
|
out_unlock:
|
|
|
|
iput(inode);
|
|
|
|
return NULL;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
2010-03-08 03:41:34 +03:00
|
|
|
int pid_getattr(struct vfsmount *mnt, struct dentry *dentry, struct kstat *stat)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
|
|
|
struct inode *inode = dentry->d_inode;
|
2006-10-02 13:17:05 +04:00
|
|
|
struct task_struct *task;
|
2008-11-14 02:39:19 +03:00
|
|
|
const struct cred *cred;
|
procfs: add hidepid= and gid= mount options
Add support for mount options to restrict access to /proc/PID/
directories. The default backward-compatible "relaxed" behaviour is left
untouched.
The first mount option is called "hidepid" and its value defines how much
info about processes we want to be available for non-owners:
hidepid=0 (default) means the old behavior - anybody may read all
world-readable /proc/PID/* files.
hidepid=1 means users may not access any /proc/<pid>/ directories except
their own. Sensitive files like cmdline, sched*, status are now protected
against other users. As permission checking is done in proc_pid_permission()
and files' permissions are left untouched, programs expecting specific
files' modes are not confused.
hidepid=2 means hidepid=1 plus all /proc/PID/ will be invisible to other
users. It doesn't mean that it hides whether a process exists (it can be
learned by other means, e.g. by kill -0 $PID), but it hides the process's euid
and egid. It complicates an intruder's task of gathering info about running
processes, whether some daemon runs with elevated privileges, whether
another user runs some sensitive program, whether other users run any
program at all, etc.
gid=XXX defines a group that will be able to gather all processes' info
(as in hidepid=0 mode). This group should be used instead of putting
a nonroot user in the sudoers file or something. However, untrusted users (like
daemons, etc.) which are not supposed to monitor the tasks in the whole
system should not be added to the group.
hidepid=1 or higher is designed to restrict access to procfs files, which
might reveal some sensitive private information like precise keystrokes
timings:
http://www.openwall.com/lists/oss-security/2011/11/05/3
hidepid=1/2 doesn't break monitoring userspace tools. ps, top, pgrep, and
conky gracefully handle EPERM/ENOENT and behave as if the current user is
the only user running processes. pstree shows the process subtree which
contains "pstree" process.
Note: the patch doesn't deal with setuid/setgid issues of keeping
preopened descriptors of procfs files (like
https://lkml.org/lkml/2011/2/7/368). We rely on the fact that the leaked
information like the scheduling counters of setuid apps doesn't threaten
anybody's privacy - only the user who started the setuid program may read the
counters.
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Greg KH <greg@kroah.com>
Cc: Theodore Tso <tytso@MIT.EDU>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: James Morris <jmorris@namei.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-11 03:11:31 +04:00
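A hedged sketch of turning the stricter mode on from userspace via mount(2); the gid value 1001 is a placeholder for whatever monitoring group a system actually uses:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* remount /proc with hidepid=2; members of gid 1001 keep the
	 * old hidepid=0 view of all processes */
	if (mount("proc", "/proc", "proc", MS_REMOUNT,
		  "hidepid=2,gid=1001") < 0) {
		perror("mount");
		return 1;
	}
	return 0;
}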
|
|
|
struct pid_namespace *pid = dentry->d_sb->s_fs_info;
|
2008-11-14 02:39:19 +03:00
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
generic_fillattr(inode, stat);
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
rcu_read_lock();
|
2012-02-09 20:48:21 +04:00
|
|
|
stat->uid = GLOBAL_ROOT_UID;
|
|
|
|
stat->gid = GLOBAL_ROOT_GID;
|
2006-10-02 13:17:05 +04:00
|
|
|
task = pid_task(proc_pid(inode), PIDTYPE_PID);
|
|
|
|
if (task) {
|
2012-01-11 03:11:31 +04:00
|
|
|
if (!has_pid_permissions(pid, task, 2)) {
|
|
|
|
rcu_read_unlock();
|
|
|
|
/*
|
|
|
|
* This doesn't prevent learning whether PID exists,
|
|
|
|
* it only makes getattr() consistent with readdir().
|
|
|
|
*/
|
|
|
|
return -ENOENT;
|
|
|
|
}
|
2006-10-02 13:17:05 +04:00
|
|
|
if ((inode->i_mode == (S_IFDIR|S_IRUGO|S_IXUGO)) ||
|
|
|
|
task_dumpable(task)) {
|
2008-11-14 02:39:19 +03:00
|
|
|
cred = __task_cred(task);
|
|
|
|
stat->uid = cred->euid;
|
|
|
|
stat->gid = cred->egid;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
}
|
2006-10-02 13:17:05 +04:00
|
|
|
rcu_read_unlock();
|
2005-06-23 11:09:43 +04:00
|
|
|
return 0;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/* dentry stuff */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Exceptional case: normally we are not allowed to unhash a busy
|
|
|
|
* directory. In this case, however, we can do it - no aliasing problems
|
|
|
|
* due to the way we treat inodes.
|
|
|
|
*
|
|
|
|
* Rewrite the inode's ownerships here because the owning task may have
|
|
|
|
* performed a setuid(), etc.
|
2006-06-26 11:25:55 +04:00
|
|
|
*
|
|
|
|
* Before the /proc/pid/status file was created the only way to read
|
|
|
|
* the effective uid of a process was to stat /proc/pid. Reading
|
|
|
|
* /proc/pid/status is slow enough that procps and other packages
|
|
|
|
* kept stating /proc/pid. To keep the rules in /proc simple I have
|
|
|
|
* made this apply to all per process world readable and executable
|
|
|
|
* directories.
|
2005-04-17 02:20:36 +04:00
|
|
|
*/
|
2012-06-11 00:03:43 +04:00
|
|
|
int pid_revalidate(struct dentry *dentry, unsigned int flags)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
2011-01-07 09:49:57 +03:00
|
|
|
struct inode *inode;
|
|
|
|
struct task_struct *task;
|
2008-11-14 02:39:19 +03:00
|
|
|
const struct cred *cred;
|
|
|
|
|
2012-06-11 00:03:43 +04:00
|
|
|
if (flags & LOOKUP_RCU)
|
2011-01-07 09:49:57 +03:00
|
|
|
return -ECHILD;
|
|
|
|
|
|
|
|
inode = dentry->d_inode;
|
|
|
|
task = get_proc_task(inode);
|
|
|
|
|
2006-06-26 11:25:55 +04:00
|
|
|
if (task) {
|
|
|
|
if ((inode->i_mode == (S_IFDIR|S_IRUGO|S_IXUGO)) ||
|
|
|
|
task_dumpable(task)) {
|
2008-11-14 02:39:19 +03:00
|
|
|
rcu_read_lock();
|
|
|
|
cred = __task_cred(task);
|
|
|
|
inode->i_uid = cred->euid;
|
|
|
|
inode->i_gid = cred->egid;
|
|
|
|
rcu_read_unlock();
|
2005-04-17 02:20:36 +04:00
|
|
|
} else {
|
2012-02-09 20:48:21 +04:00
|
|
|
inode->i_uid = GLOBAL_ROOT_UID;
|
|
|
|
inode->i_gid = GLOBAL_ROOT_GID;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
2006-07-15 08:48:03 +04:00
|
|
|
inode->i_mode &= ~(S_ISUID | S_ISGID);
|
2005-04-17 02:20:36 +04:00
|
|
|
security_task_to_inode(task, inode);
|
2006-06-26 11:25:55 +04:00
|
|
|
put_task_struct(task);
|
2005-04-17 02:20:36 +04:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-01-24 03:55:39 +04:00
|
|
|
static inline bool proc_inode_is_dead(struct inode *inode)
|
|
|
|
{
|
|
|
|
return !proc_pid(inode)->tasks[PIDTYPE_PID].first;
|
|
|
|
}
|
|
|
|
|
2013-04-12 04:08:50 +04:00
|
|
|
int pid_delete_dentry(const struct dentry *dentry)
|
|
|
|
{
|
|
|
|
/* Is the task we represent dead?
|
|
|
|
* If so, then don't put the dentry on the lru list,
|
|
|
|
* kill it immediately.
|
|
|
|
*/
|
2014-01-24 03:55:39 +04:00
|
|
|
return proc_inode_is_dead(dentry->d_inode);
|
2013-04-12 04:08:50 +04:00
|
|
|
}
|
|
|
|
|
2010-03-08 03:41:34 +03:00
|
|
|
const struct dentry_operations pid_dentry_operations =
|
2006-10-02 13:17:05 +04:00
|
|
|
{
|
|
|
|
.d_revalidate = pid_revalidate,
|
|
|
|
.d_delete = pid_delete_dentry,
|
|
|
|
};
|
|
|
|
|
|
|
|
/* Lookups */
|
|
|
|
|
2006-10-02 13:18:57 +04:00
|
|
|
/*
|
|
|
|
* Fill a directory entry.
|
|
|
|
*
|
|
|
|
* If possible create the dcache entry and derive our inode number and
|
|
|
|
* file type from the dcache entry.
|
|
|
|
*
|
|
|
|
* Since all of the proc inode numbers are dynamically generated, the inode
|
|
|
|
* numbers do not exist until the inode is cached. This means creating
|
|
|
|
* the dcache entry in readdir is necessary to keep the inode numbers
|
|
|
|
* reported by readdir in sync with the inode numbers reported
|
|
|
|
* by stat.
|
|
|
|
*/
|
2013-05-16 20:07:31 +04:00
|
|
|
bool proc_fill_cache(struct file *file, struct dir_context *ctx,
|
2010-03-08 03:41:34 +03:00
|
|
|
const char *name, int len,
|
2007-05-08 11:26:15 +04:00
|
|
|
instantiate_t instantiate, struct task_struct *task, const void *ptr)
|
2006-10-02 13:18:49 +04:00
|
|
|
{
|
2013-05-16 20:07:31 +04:00
|
|
|
struct dentry *child, *dir = file->f_path.dentry;
|
2013-06-15 11:33:10 +04:00
|
|
|
struct qstr qname = QSTR_INIT(name, len);
|
2006-10-02 13:18:49 +04:00
|
|
|
struct inode *inode;
|
2013-06-15 11:33:10 +04:00
|
|
|
unsigned type;
|
|
|
|
ino_t ino;
|
2006-10-02 13:18:49 +04:00
|
|
|
|
2013-06-15 11:33:10 +04:00
|
|
|
child = d_hash_and_lookup(dir, &qname);
|
2006-10-02 13:18:49 +04:00
|
|
|
if (!child) {
|
2013-06-15 11:33:10 +04:00
|
|
|
child = d_alloc(dir, &qname);
|
|
|
|
if (!child)
|
|
|
|
goto end_instantiate;
|
|
|
|
if (instantiate(dir->d_inode, child, task, ptr) < 0) {
|
|
|
|
dput(child);
|
|
|
|
goto end_instantiate;
|
2006-10-02 13:18:49 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
inode = child->d_inode;
|
2013-06-15 10:26:35 +04:00
|
|
|
ino = inode->i_ino;
|
|
|
|
type = inode->i_mode >> 12;
|
2006-10-02 13:18:49 +04:00
|
|
|
dput(child);
|
2013-05-16 20:07:31 +04:00
|
|
|
return dir_emit(ctx, name, len, ino, type);
|
2013-06-15 11:33:10 +04:00
|
|
|
|
|
|
|
end_instantiate:
|
|
|
|
return dir_emit(ctx, name, len, 1, DT_UNKNOWN);
|
2006-10-02 13:18:49 +04:00
|
|
|
}
|
|
|
|
|
2012-01-11 03:11:23 +04:00
|
|
|
#ifdef CONFIG_CHECKPOINT_RESTORE
|
|
|
|
|
|
|
|
/*
|
|
|
|
* dname_to_vma_addr - maps a dentry name into two unsigned longs
|
|
|
|
* which represent vma start and end addresses.
|
|
|
|
*/
|
|
|
|
static int dname_to_vma_addr(struct dentry *dentry,
|
|
|
|
unsigned long *start, unsigned long *end)
|
|
|
|
{
|
|
|
|
if (sscanf(dentry->d_name.name, "%lx-%lx", start, end) != 2)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2012-06-11 00:03:43 +04:00
|
|
|
static int map_files_d_revalidate(struct dentry *dentry, unsigned int flags)
|
2012-01-11 03:11:23 +04:00
|
|
|
{
|
|
|
|
unsigned long vm_start, vm_end;
|
|
|
|
bool exact_vma_exists = false;
|
|
|
|
struct mm_struct *mm = NULL;
|
|
|
|
struct task_struct *task;
|
|
|
|
const struct cred *cred;
|
|
|
|
struct inode *inode;
|
|
|
|
int status = 0;
|
|
|
|
|
2012-06-11 00:03:43 +04:00
|
|
|
if (flags & LOOKUP_RCU)
|
2012-01-11 03:11:23 +04:00
|
|
|
return -ECHILD;
|
|
|
|
|
|
|
|
if (!capable(CAP_SYS_ADMIN)) {
|
2013-02-20 06:13:55 +04:00
|
|
|
status = -EPERM;
|
2012-01-11 03:11:23 +04:00
|
|
|
goto out_notask;
|
|
|
|
}
|
|
|
|
|
|
|
|
inode = dentry->d_inode;
|
|
|
|
task = get_proc_task(inode);
|
|
|
|
if (!task)
|
|
|
|
goto out_notask;
|
|
|
|
|
2012-06-01 03:26:18 +04:00
|
|
|
mm = mm_access(task, PTRACE_MODE_READ);
|
|
|
|
if (IS_ERR_OR_NULL(mm))
|
2012-01-11 03:11:23 +04:00
|
|
|
goto out;
|
|
|
|
|
|
|
|
if (!dname_to_vma_addr(dentry, &vm_start, &vm_end)) {
|
|
|
|
down_read(&mm->mmap_sem);
|
|
|
|
exact_vma_exists = !!find_exact_vma(mm, vm_start, vm_end);
|
|
|
|
up_read(&mm->mmap_sem);
|
|
|
|
}
|
|
|
|
|
|
|
|
mmput(mm);
|
|
|
|
|
|
|
|
if (exact_vma_exists) {
|
|
|
|
if (task_dumpable(task)) {
|
|
|
|
rcu_read_lock();
|
|
|
|
cred = __task_cred(task);
|
|
|
|
inode->i_uid = cred->euid;
|
|
|
|
inode->i_gid = cred->egid;
|
|
|
|
rcu_read_unlock();
|
|
|
|
} else {
|
2012-02-09 20:48:21 +04:00
|
|
|
inode->i_uid = GLOBAL_ROOT_UID;
|
|
|
|
inode->i_gid = GLOBAL_ROOT_GID;
|
2012-01-11 03:11:23 +04:00
|
|
|
}
|
|
|
|
security_task_to_inode(task, inode);
|
|
|
|
status = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
put_task_struct(task);
|
|
|
|
|
|
|
|
out_notask:
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct dentry_operations tid_map_files_dentry_operations = {
|
|
|
|
.d_revalidate = map_files_d_revalidate,
|
|
|
|
.d_delete = pid_delete_dentry,
|
|
|
|
};
|
|
|
|
|
|
|
|
static int proc_map_files_get_link(struct dentry *dentry, struct path *path)
|
|
|
|
{
|
|
|
|
unsigned long vm_start, vm_end;
|
|
|
|
struct vm_area_struct *vma;
|
|
|
|
struct task_struct *task;
|
|
|
|
struct mm_struct *mm;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
rc = -ENOENT;
|
|
|
|
task = get_proc_task(dentry->d_inode);
|
|
|
|
if (!task)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
mm = get_task_mm(task);
|
|
|
|
put_task_struct(task);
|
|
|
|
if (!mm)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
rc = dname_to_vma_addr(dentry, &vm_start, &vm_end);
|
|
|
|
if (rc)
|
|
|
|
goto out_mmput;
|
|
|
|
|
2014-03-11 02:49:45 +04:00
|
|
|
rc = -ENOENT;
|
2012-01-11 03:11:23 +04:00
|
|
|
down_read(&mm->mmap_sem);
|
|
|
|
vma = find_exact_vma(mm, vm_start, vm_end);
|
|
|
|
if (vma && vma->vm_file) {
|
|
|
|
*path = vma->vm_file->f_path;
|
|
|
|
path_get(path);
|
|
|
|
rc = 0;
|
|
|
|
}
|
|
|
|
up_read(&mm->mmap_sem);
|
|
|
|
|
|
|
|
out_mmput:
|
|
|
|
mmput(mm);
|
|
|
|
out:
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
struct map_files_info {
|
2012-08-27 22:55:26 +04:00
|
|
|
fmode_t mode;
|
2012-01-11 03:11:23 +04:00
|
|
|
unsigned long len;
|
|
|
|
unsigned char name[4*sizeof(long)+2]; /* max: %lx-%lx\0 */
|
|
|
|
};
|
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
static int
|
2012-01-11 03:11:23 +04:00
|
|
|
proc_map_files_instantiate(struct inode *dir, struct dentry *dentry,
|
|
|
|
struct task_struct *task, const void *ptr)
|
|
|
|
{
|
2012-08-27 22:55:26 +04:00
|
|
|
fmode_t mode = (fmode_t)(unsigned long)ptr;
|
2012-01-11 03:11:23 +04:00
|
|
|
struct proc_inode *ei;
|
|
|
|
struct inode *inode;
|
|
|
|
|
|
|
|
inode = proc_pid_make_inode(dir->i_sb, task);
|
|
|
|
if (!inode)
|
2013-06-15 11:15:20 +04:00
|
|
|
return -ENOENT;
|
2012-01-11 03:11:23 +04:00
|
|
|
|
|
|
|
ei = PROC_I(inode);
|
|
|
|
ei->op.proc_get_link = proc_map_files_get_link;
|
|
|
|
|
|
|
|
inode->i_op = &proc_pid_link_inode_operations;
|
|
|
|
inode->i_size = 64;
|
|
|
|
inode->i_mode = S_IFLNK;
|
|
|
|
|
2012-08-27 22:55:26 +04:00
|
|
|
if (mode & FMODE_READ)
|
2012-01-11 03:11:23 +04:00
|
|
|
inode->i_mode |= S_IRUSR;
|
2012-08-27 22:55:26 +04:00
|
|
|
if (mode & FMODE_WRITE)
|
2012-01-11 03:11:23 +04:00
|
|
|
inode->i_mode |= S_IWUSR;
|
|
|
|
|
|
|
|
d_set_d_op(dentry, &tid_map_files_dentry_operations);
|
|
|
|
d_add(dentry, inode);
|
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
return 0;
|
2012-01-11 03:11:23 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
static struct dentry *proc_map_files_lookup(struct inode *dir,
|
2012-06-11 01:13:09 +04:00
|
|
|
struct dentry *dentry, unsigned int flags)
|
2012-01-11 03:11:23 +04:00
|
|
|
{
|
|
|
|
unsigned long vm_start, vm_end;
|
|
|
|
struct vm_area_struct *vma;
|
|
|
|
struct task_struct *task;
|
2013-06-15 11:15:20 +04:00
|
|
|
int result;
|
2012-01-11 03:11:23 +04:00
|
|
|
struct mm_struct *mm;
|
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
result = -EPERM;
|
2012-01-11 03:11:23 +04:00
|
|
|
if (!capable(CAP_SYS_ADMIN))
|
|
|
|
goto out;
|
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
result = -ENOENT;
|
2012-01-11 03:11:23 +04:00
|
|
|
task = get_proc_task(dir);
|
|
|
|
if (!task)
|
|
|
|
goto out;
|
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
result = -EACCES;
|
2012-05-18 04:03:25 +04:00
|
|
|
if (!ptrace_may_access(task, PTRACE_MODE_READ))
|
2012-01-11 03:11:23 +04:00
|
|
|
goto out_put_task;
|
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
result = -ENOENT;
|
2012-01-11 03:11:23 +04:00
|
|
|
if (dname_to_vma_addr(dentry, &vm_start, &vm_end))
|
2012-05-18 04:03:25 +04:00
|
|
|
goto out_put_task;
|
2012-01-11 03:11:23 +04:00
|
|
|
|
|
|
|
mm = get_task_mm(task);
|
|
|
|
if (!mm)
|
2012-05-18 04:03:25 +04:00
|
|
|
goto out_put_task;
|
2012-01-11 03:11:23 +04:00
|
|
|
|
|
|
|
down_read(&mm->mmap_sem);
|
|
|
|
vma = find_exact_vma(mm, vm_start, vm_end);
|
|
|
|
if (!vma)
|
|
|
|
goto out_no_vma;
|
|
|
|
|
2012-11-27 04:29:42 +04:00
|
|
|
if (vma->vm_file)
|
|
|
|
result = proc_map_files_instantiate(dir, dentry, task,
|
|
|
|
(void *)(unsigned long)vma->vm_file->f_mode);
|
2012-01-11 03:11:23 +04:00
|
|
|
|
|
|
|
out_no_vma:
|
|
|
|
up_read(&mm->mmap_sem);
|
|
|
|
mmput(mm);
|
|
|
|
out_put_task:
|
|
|
|
put_task_struct(task);
|
|
|
|
out:
|
2013-06-15 11:15:20 +04:00
|
|
|
return ERR_PTR(result);
|
2012-01-11 03:11:23 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
static const struct inode_operations proc_map_files_inode_operations = {
|
|
|
|
.lookup = proc_map_files_lookup,
|
|
|
|
.permission = proc_fd_permission,
|
|
|
|
.setattr = proc_setattr,
|
|
|
|
};
|
|
|
|
|
|
|
|
static int
|
2013-05-16 20:07:31 +04:00
|
|
|
proc_map_files_readdir(struct file *file, struct dir_context *ctx)
|
2012-01-11 03:11:23 +04:00
|
|
|
{
|
|
|
|
struct vm_area_struct *vma;
|
|
|
|
struct task_struct *task;
|
|
|
|
struct mm_struct *mm;
|
2013-05-16 20:07:31 +04:00
|
|
|
unsigned long nr_files, pos, i;
|
|
|
|
struct flex_array *fa = NULL;
|
|
|
|
struct map_files_info info;
|
|
|
|
struct map_files_info *p;
|
2012-01-11 03:11:23 +04:00
|
|
|
int ret;
|
|
|
|
|
2013-02-20 06:13:55 +04:00
|
|
|
ret = -EPERM;
|
2012-01-11 03:11:23 +04:00
|
|
|
if (!capable(CAP_SYS_ADMIN))
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
ret = -ENOENT;
|
2013-05-16 20:07:31 +04:00
|
|
|
task = get_proc_task(file_inode(file));
|
2012-01-11 03:11:23 +04:00
|
|
|
if (!task)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
ret = -EACCES;
|
2012-05-18 04:03:25 +04:00
|
|
|
if (!ptrace_may_access(task, PTRACE_MODE_READ))
|
2012-01-11 03:11:23 +04:00
|
|
|
goto out_put_task;
|
|
|
|
|
|
|
|
ret = 0;
|
2013-05-16 20:07:31 +04:00
|
|
|
if (!dir_emit_dots(file, ctx))
|
|
|
|
goto out_put_task;
|
2012-01-11 03:11:23 +04:00
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
mm = get_task_mm(task);
|
|
|
|
if (!mm)
|
|
|
|
goto out_put_task;
|
|
|
|
down_read(&mm->mmap_sem);
|
2012-01-11 03:11:23 +04:00
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
nr_files = 0;
|
2012-01-11 03:11:23 +04:00
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
/*
|
|
|
|
* We need two passes here:
|
|
|
|
*
|
|
|
|
* 1) Collect vmas of mapped files with mmap_sem taken
|
|
|
|
* 2) Release mmap_sem and instantiate entries
|
|
|
|
*
|
|
|
|
* otherwise lockdep complains, since the filldir()
|
|
|
|
* routine might need to take mmap_sem in might_fault().
|
|
|
|
*/
|
2012-01-11 03:11:23 +04:00
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
for (vma = mm->mmap, pos = 2; vma; vma = vma->vm_next) {
|
|
|
|
if (vma->vm_file && ++pos > ctx->pos)
|
|
|
|
nr_files++;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (nr_files) {
|
|
|
|
fa = flex_array_alloc(sizeof(info), nr_files,
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!fa || flex_array_prealloc(fa, 0, nr_files,
|
|
|
|
GFP_KERNEL)) {
|
|
|
|
ret = -ENOMEM;
|
|
|
|
if (fa)
|
|
|
|
flex_array_free(fa);
|
|
|
|
up_read(&mm->mmap_sem);
|
|
|
|
mmput(mm);
|
|
|
|
goto out_put_task;
|
2012-01-11 03:11:23 +04:00
|
|
|
}
|
2013-05-16 20:07:31 +04:00
|
|
|
for (i = 0, vma = mm->mmap, pos = 2; vma;
|
|
|
|
vma = vma->vm_next) {
|
|
|
|
if (!vma->vm_file)
|
|
|
|
continue;
|
|
|
|
if (++pos <= ctx->pos)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
info.mode = vma->vm_file->f_mode;
|
|
|
|
info.len = snprintf(info.name,
|
|
|
|
sizeof(info.name), "%lx-%lx",
|
|
|
|
vma->vm_start, vma->vm_end);
|
|
|
|
if (flex_array_put(fa, i++, &info, GFP_KERNEL))
|
|
|
|
BUG();
|
2012-01-11 03:11:23 +04:00
|
|
|
}
|
|
|
|
}
|
2013-05-16 20:07:31 +04:00
|
|
|
up_read(&mm->mmap_sem);
|
|
|
|
|
|
|
|
for (i = 0; i < nr_files; i++) {
|
|
|
|
p = flex_array_get(fa, i);
|
|
|
|
if (!proc_fill_cache(file, ctx,
|
|
|
|
p->name, p->len,
|
|
|
|
proc_map_files_instantiate,
|
|
|
|
task,
|
|
|
|
(void *)(unsigned long)p->mode))
|
|
|
|
break;
|
|
|
|
ctx->pos++;
|
2012-01-11 03:11:23 +04:00
|
|
|
}
|
2013-05-16 20:07:31 +04:00
|
|
|
if (fa)
|
|
|
|
flex_array_free(fa);
|
|
|
|
mmput(mm);
|
2012-01-11 03:11:23 +04:00
|
|
|
|
|
|
|
out_put_task:
|
|
|
|
put_task_struct(task);
|
|
|
|
out:
|
|
|
|
return ret;
|
|
|
|
}
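/*
 * Design note: the collected entries are staged in a flex_array rather
 * than one large kmalloc() so that a task with very many mapped files
 * does not force a high-order allocation; flex_array backs the elements
 * with individually allocated parts.
 */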
|
|
|
|
|
|
|
|
static const struct file_operations proc_map_files_operations = {
|
|
|
|
.read = generic_read_dir,
|
2013-05-16 20:07:31 +04:00
|
|
|
.iterate = proc_map_files_readdir,
|
2012-01-11 03:11:23 +04:00
|
|
|
.llseek = default_llseek,
|
|
|
|
};
|
|
|
|
|
2013-03-11 13:12:45 +04:00
|
|
|
struct timers_private {
|
|
|
|
struct pid *pid;
|
|
|
|
struct task_struct *task;
|
|
|
|
struct sighand_struct *sighand;
|
2013-03-11 13:13:08 +04:00
|
|
|
struct pid_namespace *ns;
|
2013-03-11 13:12:45 +04:00
|
|
|
unsigned long flags;
|
|
|
|
};
|
|
|
|
|
|
|
|
static void *timers_start(struct seq_file *m, loff_t *pos)
|
|
|
|
{
|
|
|
|
struct timers_private *tp = m->private;
|
|
|
|
|
|
|
|
tp->task = get_pid_task(tp->pid, PIDTYPE_PID);
|
|
|
|
if (!tp->task)
|
|
|
|
return ERR_PTR(-ESRCH);
|
|
|
|
|
|
|
|
tp->sighand = lock_task_sighand(tp->task, &tp->flags);
|
|
|
|
if (!tp->sighand)
|
|
|
|
return ERR_PTR(-ESRCH);
|
|
|
|
|
|
|
|
return seq_list_start(&tp->task->signal->posix_timers, *pos);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void *timers_next(struct seq_file *m, void *v, loff_t *pos)
|
|
|
|
{
|
|
|
|
struct timers_private *tp = m->private;
|
|
|
|
return seq_list_next(v, &tp->task->signal->posix_timers, pos);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void timers_stop(struct seq_file *m, void *v)
|
|
|
|
{
|
|
|
|
struct timers_private *tp = m->private;
|
|
|
|
|
|
|
|
if (tp->sighand) {
|
|
|
|
unlock_task_sighand(tp->task, &tp->flags);
|
|
|
|
tp->sighand = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (tp->task) {
|
|
|
|
put_task_struct(tp->task);
|
|
|
|
tp->task = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static int show_timer(struct seq_file *m, void *v)
|
|
|
|
{
|
|
|
|
struct k_itimer *timer;
|
2013-03-11 13:13:08 +04:00
|
|
|
struct timers_private *tp = m->private;
|
|
|
|
int notify;
|
2014-08-09 01:21:33 +04:00
|
|
|
static const char * const nstr[] = {
|
2013-03-11 13:13:08 +04:00
|
|
|
[SIGEV_SIGNAL] = "signal",
|
|
|
|
[SIGEV_NONE] = "none",
|
|
|
|
[SIGEV_THREAD] = "thread",
|
|
|
|
};
|
2013-03-11 13:12:45 +04:00
|
|
|
|
|
|
|
timer = list_entry((struct list_head *)v, struct k_itimer, list);
|
2013-03-11 13:13:08 +04:00
|
|
|
notify = timer->it_sigev_notify;
|
|
|
|
|
2013-03-11 13:12:45 +04:00
|
|
|
seq_printf(m, "ID: %d\n", timer->it_id);
|
2013-03-11 13:13:08 +04:00
|
|
|
seq_printf(m, "signal: %d/%p\n", timer->sigq->info.si_signo,
|
|
|
|
timer->sigq->info.si_value.sival_ptr);
|
|
|
|
seq_printf(m, "notify: %s/%s.%d\n",
|
|
|
|
nstr[notify & ~SIGEV_THREAD_ID],
|
|
|
|
(notify & SIGEV_THREAD_ID) ? "tid" : "pid",
|
|
|
|
pid_nr_ns(timer->it_pid, tp->ns));
|
2013-05-17 02:12:03 +04:00
|
|
|
seq_printf(m, "ClockID: %d\n", timer->it_clock);
|
2013-03-11 13:12:45 +04:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
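/*
 * Illustrative /proc/<pid>/timers entry as emitted above (all values
 * are made up for the example):
 *
 *	ID: 1
 *	signal: 14/0000000000000000
 *	notify: signal/pid.1234
 *	ClockID: 0
 */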
|
|
|
|
|
|
|
|
static const struct seq_operations proc_timers_seq_ops = {
|
|
|
|
.start = timers_start,
|
|
|
|
.next = timers_next,
|
|
|
|
.stop = timers_stop,
|
|
|
|
.show = show_timer,
|
|
|
|
};
|
|
|
|
|
|
|
|
static int proc_timers_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
struct timers_private *tp;
|
|
|
|
|
|
|
|
tp = __seq_open_private(file, &proc_timers_seq_ops,
|
|
|
|
sizeof(struct timers_private));
|
|
|
|
if (!tp)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
tp->pid = proc_pid(inode);
|
2013-03-11 13:13:08 +04:00
|
|
|
tp->ns = inode->i_sb->s_fs_info;
|
2013-03-11 13:12:45 +04:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations proc_timers_operations = {
|
|
|
|
.open = proc_timers_open,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = seq_release_private,
|
|
|
|
};
|
2012-01-11 03:11:23 +04:00
|
|
|
#endif /* CONFIG_CHECKPOINT_RESTORE */
|
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
static int proc_pident_instantiate(struct inode *dir,
|
2007-05-08 11:26:15 +04:00
|
|
|
struct dentry *dentry, struct task_struct *task, const void *ptr)
|
2006-10-02 13:18:49 +04:00
|
|
|
{
|
2007-05-08 11:26:15 +04:00
|
|
|
const struct pid_entry *p = ptr;
|
2006-10-02 13:18:49 +04:00
|
|
|
struct inode *inode;
|
|
|
|
struct proc_inode *ei;
|
|
|
|
|
2006-10-02 13:18:49 +04:00
|
|
|
inode = proc_pid_make_inode(dir->i_sb, task);
|
2006-10-02 13:18:49 +04:00
|
|
|
if (!inode)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
ei = PROC_I(inode);
|
|
|
|
inode->i_mode = p->mode;
|
|
|
|
if (S_ISDIR(inode->i_mode))
|
2011-10-28 16:13:29 +04:00
|
|
|
set_nlink(inode, 2); /* Use getattr to fix if necessary */
|
2006-10-02 13:18:49 +04:00
|
|
|
if (p->iop)
|
|
|
|
inode->i_op = p->iop;
|
|
|
|
if (p->fop)
|
|
|
|
inode->i_fop = p->fop;
|
|
|
|
ei->op = p->op;
|
2011-01-07 09:49:55 +03:00
|
|
|
d_set_d_op(dentry, &pid_dentry_operations);
|
2006-10-02 13:18:49 +04:00
|
|
|
d_add(dentry, inode);
|
|
|
|
/* Close the race with the process dying before we return the dentry */
|
2012-06-11 00:03:43 +04:00
|
|
|
if (pid_revalidate(dentry, 0))
|
2013-06-15 11:15:20 +04:00
|
|
|
return 0;
|
2006-10-02 13:18:49 +04:00
|
|
|
out:
|
2013-06-15 11:15:20 +04:00
|
|
|
return -ENOENT;
|
2006-10-02 13:18:49 +04:00
|
|
|
}
|
|
|
|
|
2005-04-17 02:20:36 +04:00
|
|
|
static struct dentry *proc_pident_lookup(struct inode *dir,
|
|
|
|
struct dentry *dentry,
|
2007-05-08 11:26:15 +04:00
|
|
|
const struct pid_entry *ents,
|
2006-10-02 13:18:56 +04:00
|
|
|
unsigned int nents)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
2013-06-15 11:15:20 +04:00
|
|
|
int error;
|
2006-06-26 11:25:55 +04:00
|
|
|
struct task_struct *task = get_proc_task(dir);
|
2007-05-08 11:26:15 +04:00
|
|
|
const struct pid_entry *p, *last;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
error = -ENOENT;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-06-26 11:25:55 +04:00
|
|
|
if (!task)
|
|
|
|
goto out_no_task;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-10-02 13:17:07 +04:00
|
|
|
/*
|
|
|
|
* Yes, it does not scale. And it should not. Don't add
|
|
|
|
* new entries into /proc/<tgid>/ without very good reasons.
|
|
|
|
*/
|
2006-10-02 13:18:56 +04:00
|
|
|
last = &ents[nents - 1];
|
|
|
|
for (p = ents; p <= last; p++) {
|
2005-04-17 02:20:36 +04:00
|
|
|
if (p->len != dentry->d_name.len)
|
|
|
|
continue;
|
|
|
|
if (!memcmp(dentry->d_name.name, p->name, p->len))
|
|
|
|
break;
|
|
|
|
}
|
2006-10-02 13:18:56 +04:00
|
|
|
if (p > last)
|
2005-04-17 02:20:36 +04:00
|
|
|
goto out;
|
|
|
|
|
2006-10-02 13:18:49 +04:00
|
|
|
error = proc_pident_instantiate(dir, dentry, task, p);
|
2005-04-17 02:20:36 +04:00
|
|
|
out:
|
2006-06-26 11:25:55 +04:00
|
|
|
put_task_struct(task);
|
|
|
|
out_no_task:
|
2013-06-15 11:15:20 +04:00
|
|
|
return ERR_PTR(error);
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
static int proc_pident_readdir(struct file *file, struct dir_context *ctx,
|
2007-05-08 11:26:15 +04:00
|
|
|
const struct pid_entry *ents, unsigned int nents)
|
2006-10-02 13:17:05 +04:00
|
|
|
{
|
2013-05-16 20:07:31 +04:00
|
|
|
struct task_struct *task = get_proc_task(file_inode(file));
|
|
|
|
const struct pid_entry *p;
|
2006-10-02 13:17:05 +04:00
|
|
|
|
|
|
|
if (!task)
|
2013-05-16 20:07:31 +04:00
|
|
|
return -ENOENT;
|
2006-10-02 13:17:05 +04:00
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
if (!dir_emit_dots(file, ctx))
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
if (ctx->pos >= nents + 2)
|
|
|
|
goto out;
|
2006-10-02 13:17:05 +04:00
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
for (p = ents + (ctx->pos - 2); p <= ents + nents - 1; p++) {
|
|
|
|
if (!proc_fill_cache(file, ctx, p->name, p->len,
|
|
|
|
proc_pident_instantiate, task, p))
|
|
|
|
break;
|
|
|
|
ctx->pos++;
|
|
|
|
}
|
2006-10-02 13:17:05 +04:00
|
|
|
out:
|
2006-10-02 13:18:49 +04:00
|
|
|
put_task_struct(task);
|
2013-05-16 20:07:31 +04:00
|
|
|
return 0;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
#ifdef CONFIG_SECURITY
|
|
|
|
static ssize_t proc_pid_attr_read(struct file * file, char __user * buf,
|
|
|
|
size_t count, loff_t *ppos)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct inode * inode = file_inode(file);
|
2007-03-12 19:17:58 +03:00
|
|
|
char *p = NULL;
|
2006-10-02 13:17:05 +04:00
|
|
|
ssize_t length;
|
|
|
|
struct task_struct *task = get_proc_task(inode);
|
|
|
|
|
|
|
|
if (!task)
|
2007-03-12 19:17:58 +03:00
|
|
|
return -ESRCH;
|
2006-10-02 13:17:05 +04:00
|
|
|
|
|
|
|
length = security_getprocattr(task,
|
2006-12-08 13:36:36 +03:00
|
|
|
(char*)file->f_path.dentry->d_name.name,
|
2007-03-12 19:17:58 +03:00
|
|
|
&p);
|
2006-10-02 13:17:05 +04:00
|
|
|
put_task_struct(task);
|
2007-03-12 19:17:58 +03:00
|
|
|
if (length > 0)
|
|
|
|
length = simple_read_from_buffer(buf, count, ppos, p, length);
|
|
|
|
kfree(p);
|
2006-10-02 13:17:05 +04:00
|
|
|
return length;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
|
|
|
|
size_t count, loff_t *ppos)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct inode * inode = file_inode(file);
|
2006-10-02 13:17:05 +04:00
|
|
|
char *page;
|
|
|
|
ssize_t length;
|
|
|
|
struct task_struct *task = get_proc_task(inode);
|
|
|
|
|
|
|
|
length = -ESRCH;
|
|
|
|
if (!task)
|
|
|
|
goto out_no_task;
|
|
|
|
if (count > PAGE_SIZE)
|
|
|
|
count = PAGE_SIZE;
|
|
|
|
|
|
|
|
/* No partial writes. */
|
|
|
|
length = -EINVAL;
|
|
|
|
if (*ppos != 0)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
length = -ENOMEM;
|
2007-10-16 12:25:52 +04:00
|
|
|
page = (char*)__get_free_page(GFP_TEMPORARY);
|
2006-10-02 13:17:05 +04:00
|
|
|
if (!page)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
length = -EFAULT;
|
|
|
|
if (copy_from_user(page, buf, count))
|
|
|
|
goto out_free;
|
|
|
|
|
2009-05-08 16:55:27 +04:00
|
|
|
/* Guard against adverse ptrace interaction */
|
2010-10-28 02:34:08 +04:00
|
|
|
length = mutex_lock_interruptible(&task->signal->cred_guard_mutex);
|
2009-05-08 16:55:27 +04:00
|
|
|
if (length < 0)
|
|
|
|
goto out_free;
|
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
length = security_setprocattr(task,
|
2006-12-08 13:36:36 +03:00
|
|
|
(char*)file->f_path.dentry->d_name.name,
|
2006-10-02 13:17:05 +04:00
|
|
|
(void*)page, count);
|
2010-10-28 02:34:08 +04:00
|
|
|
mutex_unlock(&task->signal->cred_guard_mutex);
|
2006-10-02 13:17:05 +04:00
|
|
|
out_free:
|
|
|
|
free_page((unsigned long) page);
|
|
|
|
out:
|
|
|
|
put_task_struct(task);
|
|
|
|
out_no_task:
|
|
|
|
return length;
|
|
|
|
}
|
|
|
|
|
2007-02-12 11:55:34 +03:00
|
|
|
static const struct file_operations proc_pid_attr_operations = {
|
2006-10-02 13:17:05 +04:00
|
|
|
.read = proc_pid_attr_read,
|
|
|
|
.write = proc_pid_attr_write,
|
2010-03-18 01:06:02 +03:00
|
|
|
.llseek = generic_file_llseek,
|
2006-10-02 13:17:05 +04:00
|
|
|
};
|
|
|
|
|
2007-05-08 11:26:15 +04:00
|
|
|
static const struct pid_entry attr_dir_stuff[] = {
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("current", S_IRUGO|S_IWUGO, proc_pid_attr_operations),
|
|
|
|
REG("prev", S_IRUGO, proc_pid_attr_operations),
|
|
|
|
REG("exec", S_IRUGO|S_IWUGO, proc_pid_attr_operations),
|
|
|
|
REG("fscreate", S_IRUGO|S_IWUGO, proc_pid_attr_operations),
|
|
|
|
REG("keycreate", S_IRUGO|S_IWUGO, proc_pid_attr_operations),
|
|
|
|
REG("sockcreate", S_IRUGO|S_IWUGO, proc_pid_attr_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
};
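/*
 * Illustrative usage (assuming an LSM such as SELinux actually backs
 * these attributes): reads and writes go through the handlers above,
 * e.g.
 *
 *	$ cat /proc/self/attr/current
 *	unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
 */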
|
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
static int proc_attr_dir_readdir(struct file *file, struct dir_context *ctx)
|
2006-10-02 13:17:05 +04:00
|
|
|
{
|
2013-05-16 20:07:31 +04:00
|
|
|
return proc_pident_readdir(file, ctx,
|
|
|
|
attr_dir_stuff, ARRAY_SIZE(attr_dir_stuff));
|
2006-10-02 13:17:05 +04:00
|
|
|
}
|
|
|
|
|
2007-02-12 11:55:34 +03:00
|
|
|
static const struct file_operations proc_attr_dir_operations = {
|
2005-04-17 02:20:36 +04:00
|
|
|
.read = generic_read_dir,
|
2013-05-16 20:07:31 +04:00
|
|
|
.iterate = proc_attr_dir_readdir,
|
2010-08-15 20:52:59 +04:00
|
|
|
.llseek = default_llseek,
|
2005-04-17 02:20:36 +04:00
|
|
|
};
|
|
|
|
|
2006-10-02 13:18:50 +04:00
|
|
|
static struct dentry *proc_attr_dir_lookup(struct inode *dir,
|
2012-06-11 01:13:09 +04:00
|
|
|
struct dentry *dentry, unsigned int flags)
|
2006-10-02 13:17:05 +04:00
|
|
|
{
|
2006-10-02 13:18:56 +04:00
|
|
|
return proc_pident_lookup(dir, dentry,
|
|
|
|
attr_dir_stuff, ARRAY_SIZE(attr_dir_stuff));
|
2006-10-02 13:17:05 +04:00
|
|
|
}
|
|
|
|
|
2007-02-12 11:55:40 +03:00
|
|
|
static const struct inode_operations proc_attr_dir_inode_operations = {
|
2006-10-02 13:18:50 +04:00
|
|
|
.lookup = proc_attr_dir_lookup,
|
2006-06-26 11:25:55 +04:00
|
|
|
.getattr = pid_getattr,
|
2006-07-15 23:26:45 +04:00
|
|
|
.setattr = proc_setattr,
|
2005-04-17 02:20:36 +04:00
|
|
|
};
|
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
|
|
|
|
2009-12-16 03:47:37 +03:00
|
|
|
#ifdef CONFIG_ELF_CORE
|
2007-07-19 12:48:28 +04:00
|
|
|
static ssize_t proc_coredump_filter_read(struct file *file, char __user *buf,
|
|
|
|
size_t count, loff_t *ppos)
|
|
|
|
{
|
2013-01-24 02:07:38 +04:00
|
|
|
struct task_struct *task = get_proc_task(file_inode(file));
|
2007-07-19 12:48:28 +04:00
|
|
|
struct mm_struct *mm;
|
|
|
|
char buffer[PROC_NUMBUF];
|
|
|
|
size_t len;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (!task)
|
|
|
|
return -ESRCH;
|
|
|
|
|
|
|
|
ret = 0;
|
|
|
|
mm = get_task_mm(task);
|
|
|
|
if (mm) {
|
|
|
|
len = snprintf(buffer, sizeof(buffer), "%08lx\n",
|
|
|
|
((mm->flags & MMF_DUMP_FILTER_MASK) >>
|
|
|
|
MMF_DUMP_FILTER_SHIFT));
|
|
|
|
mmput(mm);
|
|
|
|
ret = simple_read_from_buffer(buf, count, ppos, buffer, len);
|
|
|
|
}
|
|
|
|
|
|
|
|
put_task_struct(task);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
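/*
 * Illustrative read of the filter: the value is the MMF_DUMP_* bit mask,
 * printed with "%08lx" above. 0x33 is a common default, though the
 * default is configuration-dependent and not guaranteed:
 *
 *	$ cat /proc/self/coredump_filter
 *	00000033
 */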
|
|
|
|
|
|
|
|
static ssize_t proc_coredump_filter_write(struct file *file,
|
|
|
|
const char __user *buf,
|
|
|
|
size_t count,
|
|
|
|
loff_t *ppos)
|
|
|
|
{
|
|
|
|
struct task_struct *task;
|
|
|
|
struct mm_struct *mm;
|
|
|
|
char buffer[PROC_NUMBUF], *end;
|
|
|
|
unsigned int val;
|
|
|
|
int ret;
|
|
|
|
int i;
|
|
|
|
unsigned long mask;
|
|
|
|
|
|
|
|
ret = -EFAULT;
|
|
|
|
memset(buffer, 0, sizeof(buffer));
|
|
|
|
if (count > sizeof(buffer) - 1)
|
|
|
|
count = sizeof(buffer) - 1;
|
|
|
|
if (copy_from_user(buffer, buf, count))
|
|
|
|
goto out_no_task;
|
|
|
|
|
|
|
|
ret = -EINVAL;
|
|
|
|
val = (unsigned int)simple_strtoul(buffer, &end, 0);
|
|
|
|
if (*end == '\n')
|
|
|
|
end++;
|
|
|
|
if (end - buffer == 0)
|
|
|
|
goto out_no_task;
|
|
|
|
|
|
|
|
ret = -ESRCH;
|
2013-01-24 02:07:38 +04:00
|
|
|
task = get_proc_task(file_inode(file));
|
2007-07-19 12:48:28 +04:00
|
|
|
if (!task)
|
|
|
|
goto out_no_task;
|
|
|
|
|
|
|
|
ret = end - buffer;
|
|
|
|
mm = get_task_mm(task);
|
|
|
|
if (!mm)
|
|
|
|
goto out_no_mm;
|
|
|
|
|
|
|
|
for (i = 0, mask = 1; i < MMF_DUMP_FILTER_BITS; i++, mask <<= 1) {
|
|
|
|
if (val & mask)
|
|
|
|
set_bit(i + MMF_DUMP_FILTER_SHIFT, &mm->flags);
|
|
|
|
else
|
|
|
|
clear_bit(i + MMF_DUMP_FILTER_SHIFT, &mm->flags);
|
|
|
|
}
|
|
|
|
|
|
|
|
mmput(mm);
|
|
|
|
out_no_mm:
|
|
|
|
put_task_struct(task);
|
|
|
|
out_no_task:
|
|
|
|
return ret;
|
|
|
|
}
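/*
 * Illustrative write: the handler above parses the value with
 * simple_strtoul() and then sets or clears each MMF_DUMP_FILTER bit,
 * e.g.
 *
 *	$ echo 0x7 > /proc/$PID/coredump_filter
 */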
|
|
|
|
|
|
|
|
static const struct file_operations proc_coredump_filter_operations = {
|
|
|
|
.read = proc_coredump_filter_read,
|
|
|
|
.write = proc_coredump_filter_write,
|
2010-03-18 01:06:02 +03:00
|
|
|
.llseek = generic_file_llseek,
|
2007-07-19 12:48:28 +04:00
|
|
|
};
|
|
|
|
#endif
|
|
|
|
|
2006-12-10 13:19:48 +03:00
|
|
|
#ifdef CONFIG_TASK_IO_ACCOUNTING
|
2014-08-09 01:21:50 +04:00
|
|
|
static int do_io_accounting(struct task_struct *task, struct seq_file *m, int whole)
|
2008-07-25 12:48:49 +04:00
|
|
|
{
|
2008-07-28 02:48:12 +04:00
|
|
|
struct task_io_accounting acct = task->ioac;
|
2008-07-27 19:29:15 +04:00
|
|
|
unsigned long flags;
|
2011-07-27 03:08:38 +04:00
|
|
|
int result;
|
2008-07-27 19:29:15 +04:00
|
|
|
|
2011-07-27 03:08:38 +04:00
|
|
|
result = mutex_lock_killable(&task->signal->cred_guard_mutex);
|
|
|
|
if (result)
|
|
|
|
return result;
|
|
|
|
|
|
|
|
if (!ptrace_may_access(task, PTRACE_MODE_READ)) {
|
|
|
|
result = -EACCES;
|
|
|
|
goto out_unlock;
|
|
|
|
}
|
2011-06-24 16:08:38 +04:00
|
|
|
|
2008-07-27 19:29:15 +04:00
|
|
|
if (whole && lock_task_sighand(task, &flags)) {
|
|
|
|
struct task_struct *t = task;
|
|
|
|
|
|
|
|
task_io_accounting_add(&acct, &task->signal->ioac);
|
|
|
|
while_each_thread(task, t)
|
|
|
|
task_io_accounting_add(&acct, &t->ioac);
|
|
|
|
|
|
|
|
unlock_task_sighand(task, &flags);
|
2008-07-25 12:48:49 +04:00
|
|
|
}
|
2014-08-09 01:21:50 +04:00
|
|
|
result = seq_printf(m,
|
2006-12-10 13:19:48 +03:00
|
|
|
"rchar: %llu\n"
|
|
|
|
"wchar: %llu\n"
|
|
|
|
"syscr: %llu\n"
|
|
|
|
"syscw: %llu\n"
|
|
|
|
"read_bytes: %llu\n"
|
|
|
|
"write_bytes: %llu\n"
|
|
|
|
"cancelled_write_bytes: %llu\n",
|
2008-08-06 00:01:34 +04:00
|
|
|
(unsigned long long)acct.rchar,
|
|
|
|
(unsigned long long)acct.wchar,
|
|
|
|
(unsigned long long)acct.syscr,
|
|
|
|
(unsigned long long)acct.syscw,
|
|
|
|
(unsigned long long)acct.read_bytes,
|
|
|
|
(unsigned long long)acct.write_bytes,
|
|
|
|
(unsigned long long)acct.cancelled_write_bytes);
|
2011-07-27 03:08:38 +04:00
|
|
|
out_unlock:
|
|
|
|
mutex_unlock(&task->signal->cred_guard_mutex);
|
|
|
|
return result;
|
2008-07-25 12:48:49 +04:00
|
|
|
}
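/*
 * Illustrative /proc/<pid>/io output as formatted above (the numbers
 * are made up; with "whole" set the counters also aggregate all threads
 * in the group):
 *
 *	rchar: 323934931
 *	wchar: 323929600
 *	syscr: 632687
 *	syscw: 632675
 *	read_bytes: 12288
 *	write_bytes: 323932160
 *	cancelled_write_bytes: 0
 */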
|
|
|
|
|
2014-08-09 01:21:50 +04:00
|
|
|
static int proc_tid_io_accounting(struct seq_file *m, struct pid_namespace *ns,
|
|
|
|
struct pid *pid, struct task_struct *task)
|
2008-07-25 12:48:49 +04:00
|
|
|
{
|
2014-08-09 01:21:50 +04:00
|
|
|
return do_io_accounting(task, m, 0);
|
2006-12-10 13:19:48 +03:00
|
|
|
}
|
2008-07-25 12:48:49 +04:00
|
|
|
|
2014-08-09 01:21:50 +04:00
|
|
|
static int proc_tgid_io_accounting(struct seq_file *m, struct pid_namespace *ns,
|
|
|
|
struct pid *pid, struct task_struct *task)
|
2008-07-25 12:48:49 +04:00
|
|
|
{
|
2014-08-09 01:21:50 +04:00
|
|
|
return do_io_accounting(task, m, 1);
|
2008-07-25 12:48:49 +04:00
|
|
|
}
|
|
|
|
#endif /* CONFIG_TASK_IO_ACCOUNTING */
|
2006-12-10 13:19:48 +03:00
|
|
|
|
2011-11-17 12:11:58 +04:00
|
|
|
#ifdef CONFIG_USER_NS
|
|
|
|
static int proc_id_map_open(struct inode *inode, struct file *file,
|
2014-08-09 01:21:22 +04:00
|
|
|
const struct seq_operations *seq_ops)
|
2011-11-17 12:11:58 +04:00
|
|
|
{
|
|
|
|
struct user_namespace *ns = NULL;
|
|
|
|
struct task_struct *task;
|
|
|
|
struct seq_file *seq;
|
|
|
|
int ret = -EINVAL;
|
|
|
|
|
|
|
|
task = get_proc_task(inode);
|
|
|
|
if (task) {
|
|
|
|
rcu_read_lock();
|
|
|
|
ns = get_user_ns(task_cred_xxx(task, user_ns));
|
|
|
|
rcu_read_unlock();
|
|
|
|
put_task_struct(task);
|
|
|
|
}
|
|
|
|
if (!ns)
|
|
|
|
goto err;
|
|
|
|
|
|
|
|
ret = seq_open(file, seq_ops);
|
|
|
|
if (ret)
|
|
|
|
goto err_put_ns;
|
|
|
|
|
|
|
|
seq = file->private_data;
|
|
|
|
seq->private = ns;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
err_put_ns:
|
|
|
|
put_user_ns(ns);
|
|
|
|
err:
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int proc_id_map_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
struct seq_file *seq = file->private_data;
|
|
|
|
struct user_namespace *ns = seq->private;
|
|
|
|
put_user_ns(ns);
|
|
|
|
return seq_release(inode, file);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int proc_uid_map_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
return proc_id_map_open(inode, file, &proc_uid_seq_operations);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int proc_gid_map_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
return proc_id_map_open(inode, file, &proc_gid_seq_operations);
|
|
|
|
}
|
|
|
|
|
2012-08-30 12:24:05 +04:00
|
|
|
static int proc_projid_map_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
return proc_id_map_open(inode, file, &proc_projid_seq_operations);
|
|
|
|
}
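/*
 * The uid/gid/projid map files opened above print one mapping extent
 * per line as "ID-inside-ns ID-outside-ns length", e.g. (values
 * illustrative):
 *
 *	$ cat /proc/$PID/uid_map
 *	0 100000 65536
 */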
|
|
|
|
|
2011-11-17 12:11:58 +04:00
|
|
|
static const struct file_operations proc_uid_map_operations = {
|
|
|
|
.open = proc_uid_map_open,
|
|
|
|
.write = proc_uid_map_write,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = proc_id_map_release,
|
|
|
|
};
|
|
|
|
|
|
|
|
static const struct file_operations proc_gid_map_operations = {
|
|
|
|
.open = proc_gid_map_open,
|
|
|
|
.write = proc_gid_map_write,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = proc_id_map_release,
|
|
|
|
};
|
2012-08-30 12:24:05 +04:00
|
|
|
|
|
|
|
static const struct file_operations proc_projid_map_operations = {
|
|
|
|
.open = proc_projid_map_open,
|
|
|
|
.write = proc_projid_map_write,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = proc_id_map_release,
|
|
|
|
};
|
2014-12-02 21:27:26 +03:00
|
|
|
|
|
|
|
static int proc_setgroups_open(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
struct user_namespace *ns = NULL;
|
|
|
|
struct task_struct *task;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = -ESRCH;
|
|
|
|
task = get_proc_task(inode);
|
|
|
|
if (task) {
|
|
|
|
rcu_read_lock();
|
|
|
|
ns = get_user_ns(task_cred_xxx(task, user_ns));
|
|
|
|
rcu_read_unlock();
|
|
|
|
put_task_struct(task);
|
|
|
|
}
|
|
|
|
if (!ns)
|
|
|
|
goto err;
|
|
|
|
|
|
|
|
if (file->f_mode & FMODE_WRITE) {
|
|
|
|
ret = -EACCES;
|
|
|
|
if (!ns_capable(ns, CAP_SYS_ADMIN))
|
|
|
|
goto err_put_ns;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = single_open(file, &proc_setgroups_show, ns);
|
|
|
|
if (ret)
|
|
|
|
goto err_put_ns;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
err_put_ns:
|
|
|
|
put_user_ns(ns);
|
|
|
|
err:
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int proc_setgroups_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
struct seq_file *seq = file->private_data;
|
|
|
|
struct user_namespace *ns = seq->private;
|
|
|
|
int ret = single_release(inode, file);
|
|
|
|
put_user_ns(ns);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct file_operations proc_setgroups_operations = {
|
|
|
|
.open = proc_setgroups_open,
|
|
|
|
.write = proc_setgroups_write,
|
|
|
|
.read = seq_read,
|
|
|
|
.llseek = seq_lseek,
|
|
|
|
.release = proc_setgroups_release,
|
|
|
|
};
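/*
 * Illustrative usage: the file holds "allow" or "deny"; writing "deny"
 * before any gid_map is written disables setgroups() inside the user
 * namespace:
 *
 *	$ echo deny > /proc/$PID/setgroups
 */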
|
2011-11-17 12:11:58 +04:00
|
|
|
#endif /* CONFIG_USER_NS */
|
|
|
|
|
2008-10-06 03:11:58 +04:00
|
|
|
static int proc_pid_personality(struct seq_file *m, struct pid_namespace *ns,
|
|
|
|
struct pid *pid, struct task_struct *task)
|
|
|
|
{
|
2011-03-23 22:52:50 +03:00
|
|
|
int err = lock_trace(task);
|
|
|
|
if (!err) {
|
|
|
|
seq_printf(m, "%08x\n", task->personality);
|
|
|
|
unlock_trace(task);
|
|
|
|
}
|
|
|
|
return err;
|
2008-10-06 03:11:58 +04:00
|
|
|
}
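/*
 * Illustrative output: the personality word is printed with "%08x"
 * above, so a plain PER_LINUX task reads back as:
 *
 *	$ cat /proc/$PID/personality
 *	00000000
 */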
|
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
/*
|
|
|
|
* Thread groups
|
|
|
|
*/
|
2007-02-12 11:55:34 +03:00
|
|
|
static const struct file_operations proc_task_operations;
|
2007-02-12 11:55:40 +03:00
|
|
|
static const struct inode_operations proc_task_inode_operations;
|
2006-10-02 13:17:07 +04:00
|
|
|
|
2007-05-08 11:26:15 +04:00
|
|
|
static const struct pid_entry tgid_base_stuff[] = {
|
2008-11-10 01:32:52 +03:00
|
|
|
DIR("task", S_IRUGO|S_IXUGO, proc_task_inode_operations, proc_task_operations),
|
|
|
|
DIR("fd", S_IRUSR|S_IXUSR, proc_fd_inode_operations, proc_fd_operations),
|
2012-01-11 03:11:23 +04:00
|
|
|
#ifdef CONFIG_CHECKPOINT_RESTORE
|
|
|
|
DIR("map_files", S_IRUSR|S_IXUSR, proc_map_files_inode_operations, proc_map_files_operations),
|
|
|
|
#endif
|
2008-11-10 01:32:52 +03:00
|
|
|
DIR("fdinfo", S_IRUSR|S_IXUSR, proc_fdinfo_inode_operations, proc_fdinfo_operations),
|
2010-03-08 03:41:34 +03:00
|
|
|
DIR("ns", S_IRUSR|S_IXUGO, proc_ns_dir_inode_operations, proc_ns_dir_operations),
|
2008-03-12 04:03:35 +03:00
|
|
|
#ifdef CONFIG_NET
|
2008-11-10 01:32:52 +03:00
|
|
|
DIR("net", S_IRUGO|S_IXUGO, proc_net_inode_operations, proc_net_operations),
|
2008-03-12 04:03:35 +03:00
|
|
|
#endif
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("environ", S_IRUSR, proc_environ_operations),
|
2014-08-09 01:21:35 +04:00
|
|
|
ONE("auxv", S_IRUSR, proc_pid_auxv),
|
2008-11-10 01:32:52 +03:00
|
|
|
ONE("status", S_IRUGO, proc_pid_status),
|
2014-04-08 02:38:36 +04:00
|
|
|
ONE("personality", S_IRUSR, proc_pid_personality),
|
2014-08-09 01:21:37 +04:00
|
|
|
ONE("limits", S_IRUGO, proc_pid_limits),
|
2007-07-09 20:52:00 +04:00
|
|
|
#ifdef CONFIG_SCHED_DEBUG
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("sched", S_IRUGO|S_IWUSR, proc_pid_sched_operations),
|
2010-11-30 16:18:03 +03:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_SCHED_AUTOGROUP
|
|
|
|
REG("autogroup", S_IRUGO|S_IWUSR, proc_pid_sched_autogroup_operations),
|
2008-07-26 06:46:00 +04:00
|
|
|
#endif
|
2009-12-15 05:00:05 +03:00
|
|
|
REG("comm", S_IRUGO|S_IWUSR, proc_pid_set_comm_operations),
|
2008-07-26 06:46:00 +04:00
|
|
|
#ifdef CONFIG_HAVE_ARCH_TRACEHOOK
|
2014-08-09 01:21:39 +04:00
|
|
|
ONE("syscall", S_IRUSR, proc_pid_syscall),
|
2007-07-09 20:52:00 +04:00
|
|
|
#endif
|
2014-08-09 01:21:41 +04:00
|
|
|
ONE("cmdline", S_IRUGO, proc_pid_cmdline),
|
2008-11-10 01:32:52 +03:00
|
|
|
ONE("stat", S_IRUGO, proc_tgid_stat),
|
|
|
|
ONE("statm", S_IRUGO, proc_pid_statm),
|
2012-03-22 03:34:04 +04:00
|
|
|
REG("maps", S_IRUGO, proc_pid_maps_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#ifdef CONFIG_NUMA
|
2012-03-22 03:34:04 +04:00
|
|
|
REG("numa_maps", S_IRUGO, proc_pid_numa_maps_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("mem", S_IRUSR|S_IWUSR, proc_mem_operations),
|
|
|
|
LNK("cwd", proc_cwd_link),
|
|
|
|
LNK("root", proc_root_link),
|
|
|
|
LNK("exe", proc_exe_link),
|
|
|
|
REG("mounts", S_IRUGO, proc_mounts_operations),
|
|
|
|
REG("mountinfo", S_IRUGO, proc_mountinfo_operations),
|
|
|
|
REG("mountstats", S_IRUSR, proc_mountstats_operations),
|
2008-02-05 09:29:07 +03:00
|
|
|
#ifdef CONFIG_PROC_PAGE_MONITOR
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("clear_refs", S_IWUSR, proc_clear_refs_operations),
|
2012-03-22 03:34:04 +04:00
|
|
|
REG("smaps", S_IRUGO, proc_pid_smaps_operations),
|
2014-04-08 02:38:38 +04:00
|
|
|
REG("pagemap", S_IRUSR, proc_pagemap_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_SECURITY
|
2008-11-10 01:32:52 +03:00
|
|
|
DIR("attr", S_IRUGO|S_IXUGO, proc_attr_dir_inode_operations, proc_attr_dir_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_KALLSYMS
|
2014-08-09 01:21:44 +04:00
|
|
|
ONE("wchan", S_IRUGO, proc_pid_wchan),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2008-11-10 11:26:08 +03:00
|
|
|
#ifdef CONFIG_STACKTRACE
|
2014-04-08 02:38:36 +04:00
|
|
|
ONE("stack", S_IRUSR, proc_pid_stack),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_SCHEDSTATS
|
2014-08-09 01:21:46 +04:00
|
|
|
ONE("schedstat", S_IRUGO, proc_pid_schedstat),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2008-01-25 23:08:34 +03:00
|
|
|
#ifdef CONFIG_LATENCYTOP
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("latency", S_IRUGO, proc_lstats_operations),
|
2008-01-25 23:08:34 +03:00
|
|
|
#endif
|
2007-10-19 10:39:39 +04:00
|
|
|
#ifdef CONFIG_PROC_PID_CPUSET
|
2014-09-18 12:03:36 +04:00
|
|
|
ONE("cpuset", S_IRUGO, proc_cpuset_show),
|
2007-10-19 10:39:35 +04:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_CGROUPS
|
2014-09-18 12:03:15 +04:00
|
|
|
ONE("cgroup", S_IRUGO, proc_cgroup_show),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2014-08-09 01:21:48 +04:00
|
|
|
ONE("oom_score", S_IRUGO, proc_oom_score),
|
2012-11-13 05:53:04 +04:00
|
|
|
REG("oom_adj", S_IRUGO|S_IWUSR, proc_oom_adj_operations),
|
oom: badness heuristic rewrite
This is a complete rewrite of the oom killer's badness() heuristic, which is
used to determine which task to kill in oom conditions. The goal is to
make it as simple and predictable as possible so the results are better
understood and we end up killing the task which will lead to the most
memory freeing while still respecting the fine-tuning from userspace.
Instead of basing the heuristic on mm->total_vm for each task, the task's
rss and swap space is used. This is a better indication of the
amount of memory that will be freeable if the oom killed task is chosen
and subsequently exits. This helps specifically in cases where KDE or
GNOME is chosen for oom kill on desktop systems instead of a memory
hogging task.
The baseline for the heuristic is a proportion of memory that each task is
currently using in memory plus swap compared to the amount of "allowable"
memory. "Allowable," in this sense, means the system-wide resources for
unconstrained oom conditions, the set of mempolicy nodes, the mems
attached to current's cpuset, or a memory controller's limit. The
proportion is given on a scale of 0 (never kill) to 1000 (always kill),
roughly meaning that a task with a badness() score of 500 consumes
approximately 50% of allowable memory resident in RAM or in swap
space.
The proportion is always relative to the amount of "allowable" memory and
not the total amount of RAM systemwide so that mempolicies and cpusets may
operate in isolation; they shall not need to know the true size of the
machine on which they are running if they are bound to a specific set of
nodes or mems, respectively.
Root tasks are given 3% extra memory just like __vm_enough_memory()
provides in LSMs. In the event of two tasks consuming similar amounts of
memory, it is generally better to save root's task.
Because of the change in the badness() heuristic's baseline, it is also
necessary to introduce a new user interface to tune it. It's not possible
to redefine the meaning of /proc/pid/oom_adj with a new scale since the
ABI cannot be changed for backward compatibility. Instead, a new tunable,
/proc/pid/oom_score_adj, is added that ranges from -1000 to +1000. It may
be used to polarize the heuristic such that certain tasks are never
considered for oom kill while others may always be considered. The value
is added directly into the badness() score so a value of -500, for
example, means to discount 50% of its memory consumption in comparison to
other tasks either on the system, bound to the mempolicy, in the cpuset,
or sharing the same memory controller.
/proc/pid/oom_adj is changed so that its meaning is rescaled into the
units used by /proc/pid/oom_score_adj, and vice versa. Changing one of
these per-task tunables will rescale the value of the other to an
equivalent meaning. Although /proc/pid/oom_adj was originally defined as
a bitshift on the badness score, it now shares the same linear growth as
/proc/pid/oom_score_adj but with different granularity. This is required
so the ABI is not broken with userspace applications and allows oom_adj to
be deprecated for future removal.
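To make the arithmetic concrete, a minimal userspace sketch of the
rescaled score (illustrative only; the real badness() also applies the
root bonus and eligibility checks described above):

#include <stdio.h>

/* rss_plus_swap and allowable must use the same unit (e.g. pages) */
static long badness(long rss_plus_swap, long allowable, long oom_score_adj)
{
	long points = rss_plus_swap * 1000 / allowable;	/* 0..1000 proportion */

	points += oom_score_adj;	/* -1000 .. +1000 tunable */
	return points < 0 ? 0 : points;
}

int main(void)
{
	/* a task using 50% of allowable memory with oom_score_adj=-500
	 * scores 0, i.e. half of its memory consumption is discounted */
	printf("%ld\n", badness(500, 1000, -500));
	return 0;
}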
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-08-10 04:19:46 +04:00
|
|
|
REG("oom_score_adj", S_IRUGO|S_IWUSR, proc_oom_score_adj_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#ifdef CONFIG_AUDITSYSCALL
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("loginuid", S_IWUSR|S_IRUGO, proc_loginuid_operations),
|
|
|
|
REG("sessionid", S_IRUGO, proc_sessionid_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2006-12-08 13:39:47 +03:00
|
|
|
#ifdef CONFIG_FAULT_INJECTION
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("make-it-fail", S_IRUGO|S_IWUSR, proc_fault_inject_operations),
|
2006-12-08 13:39:47 +03:00
|
|
|
#endif
|
2009-12-16 03:47:37 +03:00
|
|
|
#ifdef CONFIG_ELF_CORE
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("coredump_filter", S_IRUGO|S_IWUSR, proc_coredump_filter_operations),
|
2007-07-19 12:48:28 +04:00
|
|
|
#endif
|
2006-12-10 13:19:48 +03:00
|
|
|
#ifdef CONFIG_TASK_IO_ACCOUNTING
|
2014-08-09 01:21:50 +04:00
|
|
|
ONE("io", S_IRUSR, proc_tgid_io_accounting),
|
2006-12-10 13:19:48 +03:00
|
|
|
#endif
|
arch/tile: more /proc and /sys file support
This change introduces a few of the less controversial /proc and
/proc/sys interfaces for tile, along with sysfs attributes for
various things that were originally proposed as /proc/tile files.
It also adjusts the "hardwall" proc API.
Arnd Bergmann reviewed the initial arch/tile submission, which
included a complete set of all the /proc/tile and /proc/sys/tile
knobs that we had added in a somewhat ad hoc way during initial
development, and provided feedback on where most of them should go.
One knob turned out to be similar enough to the existing
/proc/sys/debug/exception-trace that it was re-implemented to use
that model instead.
Another knob was /proc/tile/grid, which reported the "grid" dimensions
of a tile chip (e.g. 8x8 processors = 64-core chip). Arnd suggested
looking at sysfs for that, so this change moves that information
to a pair of sysfs attributes (chip_width and chip_height) in the
/sys/devices/system/cpu directory. We also put the "chip_serial"
and "chip_revision" information from our old /proc/tile/board file
as attributes in /sys/devices/system/cpu.
Other information collected via hypervisor APIs is now placed in
/sys/hypervisor. We create a /sys/hypervisor/type file (holding the
constant string "tilera") to be parallel with the Xen use of
/sys/hypervisor/type holding "xen". We create three top-level files,
"version" (the hypervisor's own version), "config_version" (the
version of the configuration file), and "hvconfig" (the contents of
the configuration file). The remaining information from our old
/proc/tile/board and /proc/tile/switch files becomes an attribute
group appearing under /sys/hypervisor/board/.
Finally, after some feedback from Arnd Bergmann for the previous
version of this patch, the /proc/tile/hardwall file is split up into
two conceptual parts. First, a directory /proc/tile/hardwall/ which
contains one file per active hardwall, each file named after the
hardwall's ID and holding a cpulist that says which cpus are enclosed by
the hardwall. Second, a /proc/PID file "hardwall" that is either
empty (for non-hardwall-using processes) or contains the hardwall ID.
Finally, this change pushes the /proc/sys/tile/unaligned_fixup/
directory, with knobs controlling the kernel code for handling the
fixup of unaligned exceptions.
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2011-05-26 20:40:09 +04:00
|
|
|
#ifdef CONFIG_HARDWALL
|
2014-08-09 01:21:52 +04:00
|
|
|
ONE("hardwall", S_IRUGO, proc_pid_hardwall),
|
2011-05-26 20:40:09 +04:00
|
|
|
#endif
|
2011-11-17 12:11:58 +04:00
|
|
|
#ifdef CONFIG_USER_NS
|
|
|
|
REG("uid_map", S_IRUGO|S_IWUSR, proc_uid_map_operations),
|
|
|
|
REG("gid_map", S_IRUGO|S_IWUSR, proc_gid_map_operations),
|
2012-08-30 12:24:05 +04:00
|
|
|
REG("projid_map", S_IRUGO|S_IWUSR, proc_projid_map_operations),
|
2014-12-02 21:27:26 +03:00
|
|
|
REG("setgroups", S_IRUGO|S_IWUSR, proc_setgroups_operations),
|
2011-11-17 12:11:58 +04:00
|
|
|
#endif
|
2013-03-11 13:12:45 +04:00
|
|
|
#ifdef CONFIG_CHECKPOINT_RESTORE
|
|
|
|
REG("timers", S_IRUGO, proc_timers_operations),
|
|
|
|
#endif
|
2006-10-02 13:17:05 +04:00
|
|
|
};
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
static int proc_tgid_base_readdir(struct file *file, struct dir_context *ctx)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
2013-05-16 20:07:31 +04:00
|
|
|
return proc_pident_readdir(file, ctx,
|
|
|
|
tgid_base_stuff, ARRAY_SIZE(tgid_base_stuff));
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
2007-02-12 11:55:34 +03:00
|
|
|
static const struct file_operations proc_tgid_base_operations = {
|
2005-04-17 02:20:36 +04:00
|
|
|
.read = generic_read_dir,
|
2013-05-16 20:07:31 +04:00
|
|
|
.iterate = proc_tgid_base_readdir,
|
llseek: automatically add .llseek fop
All file_operations should get a .llseek operation so we can make
nonseekable_open the default for future file operations without a
.llseek pointer.
The three cases that we can automatically detect are no_llseek, seq_lseek
and default_llseek. For cases where we can automatically prove that
the file offset is always ignored, we use noop_llseek, which maintains
the current behavior of not returning an error from a seek.
New drivers should normally not use noop_llseek but instead use no_llseek
and call nonseekable_open at open time. Existing drivers can be converted
to do the same when the maintainer knows for certain that no user code
relies on calling seek on the device file.
The generated code is often incorrectly indented and right now contains
comments that clarify for each added line why a specific variant was
chosen. In the version that gets submitted upstream, the comments will
be gone and I will manually fix the indentation, because there does not
seem to be a way to do that using coccinelle.
Some amount of new code is currently sitting in linux-next that should get
the same modifications, which I will do at the end of the merge window.
Many thanks to Julia Lawall for helping me learn to write a semantic
patch that does all this.
===== begin semantic patch =====
// This adds an llseek= method to all file operations,
// as a preparation for making no_llseek the default.
//
// The rules are
// - use no_llseek explicitly if we do nonseekable_open
// - use seq_lseek for sequential files
// - use default_llseek if we know we access f_pos
// - use noop_llseek if we know we don't access f_pos,
// but we still want to allow users to call lseek
//
@ open1 exists @
identifier nested_open;
@@
nested_open(...)
{
<+...
nonseekable_open(...)
...+>
}
@ open exists@
identifier open_f;
identifier i, f;
identifier open1.nested_open;
@@
int open_f(struct inode *i, struct file *f)
{
<+...
(
nonseekable_open(...)
|
nested_open(...)
)
...+>
}
@ read disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ read_no_fpos disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
... when != off
}
@ write @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ write_no_fpos @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
... when != off
}
@ fops0 @
identifier fops;
@@
struct file_operations fops = {
...
};
@ has_llseek depends on fops0 @
identifier fops0.fops;
identifier llseek_f;
@@
struct file_operations fops = {
...
.llseek = llseek_f,
...
};
@ has_read depends on fops0 @
identifier fops0.fops;
identifier read_f;
@@
struct file_operations fops = {
...
.read = read_f,
...
};
@ has_write depends on fops0 @
identifier fops0.fops;
identifier write_f;
@@
struct file_operations fops = {
...
.write = write_f,
...
};
@ has_open depends on fops0 @
identifier fops0.fops;
identifier open_f;
@@
struct file_operations fops = {
...
.open = open_f,
...
};
// use no_llseek if we call nonseekable_open
////////////////////////////////////////////
@ nonseekable1 depends on !has_llseek && has_open @
identifier fops0.fops;
identifier nso ~= "nonseekable_open";
@@
struct file_operations fops = {
... .open = nso, ...
+.llseek = no_llseek, /* nonseekable */
};
@ nonseekable2 depends on !has_llseek @
identifier fops0.fops;
identifier open.open_f;
@@
struct file_operations fops = {
... .open = open_f, ...
+.llseek = no_llseek, /* open uses nonseekable */
};
// use seq_lseek for sequential files
/////////////////////////////////////
@ seq depends on !has_llseek @
identifier fops0.fops;
identifier sr ~= "seq_read";
@@
struct file_operations fops = {
... .read = sr, ...
+.llseek = seq_lseek, /* we have seq_read */
};
// use default_llseek if there is a readdir
///////////////////////////////////////////
@ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier readdir_e;
@@
// any other fop is used that changes pos
struct file_operations fops = {
... .readdir = readdir_e, ...
+.llseek = default_llseek, /* readdir is present */
};
// use default_llseek if at least one of read/write touches f_pos
/////////////////////////////////////////////////////////////////
@ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read.read_f;
@@
// read fops use offset
struct file_operations fops = {
... .read = read_f, ...
+.llseek = default_llseek, /* read accesses f_pos */
};
@ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write.write_f;
@@
// write fops use offset
struct file_operations fops = {
... .write = write_f, ...
+ .llseek = default_llseek, /* write accesses f_pos */
};
// Use noop_llseek if neither read nor write accesses f_pos
///////////////////////////////////////////////////////////
@ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
identifier write_no_fpos.write_f;
@@
// write fops use offset
struct file_operations fops = {
...
.write = write_f,
.read = read_f,
...
+.llseek = noop_llseek, /* read and write both use no f_pos */
};
@ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write_no_fpos.write_f;
@@
struct file_operations fops = {
... .write = write_f, ...
+.llseek = noop_llseek, /* write uses no f_pos */
};
@ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
@@
struct file_operations fops = {
... .read = read_f, ...
+.llseek = noop_llseek, /* read uses no f_pos */
};
@ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
@@
struct file_operations fops = {
...
+.llseek = noop_llseek, /* no read or write fn */
};
===== End semantic patch =====
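For illustration, the net effect of the "seq" rule on a typical
read-only seq_file user is a one-line addition (a sketch, not a
specific in-tree instance; example_open is assumed to use seq_open()):

static const struct file_operations example_fops = {
	.open		= example_open,
	.read		= seq_read,
	.release	= seq_release,
	.llseek		= seq_lseek,	/* added by the semantic patch */
};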
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Julia Lawall <julia@diku.dk>
Cc: Christoph Hellwig <hch@infradead.org>
2010-08-15 20:52:59 +04:00
|
|
|
.llseek = default_llseek,
|
2005-04-17 02:20:36 +04:00
|
|
|
};
|
|
|
|
|
2012-06-11 01:13:09 +04:00
|
|
|
static struct dentry *proc_tgid_base_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
|
|
|
|
{
|
2006-10-02 13:18:56 +04:00
|
|
|
return proc_pident_lookup(dir, dentry,
|
|
|
|
tgid_base_stuff, ARRAY_SIZE(tgid_base_stuff));
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
2007-02-12 11:55:40 +03:00
|
|
|
static const struct inode_operations proc_tgid_base_inode_operations = {
|
2006-10-02 13:17:05 +04:00
|
|
|
.lookup = proc_tgid_base_lookup,
|
2006-06-26 11:25:55 +04:00
|
|
|
.getattr = pid_getattr,
|
2006-07-15 23:26:45 +04:00
|
|
|
.setattr = proc_setattr,
|
procfs: add hidepid= and gid= mount options
Add support for mount options to restrict access to /proc/PID/
directories. The default backward-compatible "relaxed" behaviour is left
untouched.
The first mount option is called "hidepid" and its value defines how much
info about processes we want to be available for non-owners:
hidepid=0 (default) means the old behavior - anybody may read all
world-readable /proc/PID/* files.
hidepid=1 means users may not access any /proc/<pid>/ directories but
their own. Sensitive files like cmdline, sched*, status are now protected
against other users. As permission checking is done in proc_pid_permission()
and files' permissions are left untouched, programs expecting specific
files' modes are not confused.
hidepid=2 means hidepid=1 plus all /proc/PID/ will be invisible to other
users. It doesn't mean that it hides whether a process exists (it can be
learned by other means, e.g. by kill -0 $PID), but it hides process' euid
and egid. It complicates an intruder's task of gathering info about running
processes: whether some daemon runs with elevated privileges, whether
another user runs some sensitive program, whether other users run any
program at all, etc.
gid=XXX defines a group that will be able to gather all processes' info
(as in hidepid=0 mode). This group should be used instead of putting a
nonroot user in the sudoers file or similar. However, untrusted users (like
daemons, etc.) that are not supposed to monitor tasks in the whole
system should not be added to the group.
hidepid=1 or higher is designed to restrict access to procfs files, which
might reveal some sensitive private information like precise keystroke
timings:
http://www.openwall.com/lists/oss-security/2011/11/05/3
hidepid=1/2 doesn't break monitoring userspace tools. ps, top, pgrep, and
conky gracefully handle EPERM/ENOENT and behave as if the current user is
the only user running processes. pstree shows the process subtree which
contains "pstree" process.
Note: the patch doesn't deal with setuid/setgid issues of keeping
preopened descriptors of procfs files (like
https://lkml.org/lkml/2011/2/7/368). We rely on the fact that leaked
information like the scheduling counters of setuid apps doesn't threaten
anybody's privacy - only the user who started the setuid program may read the
counters.
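As a usage sketch (not part of the patch): once these options exist,
proc can be remounted from C roughly as follows; the gid value 1001 is
purely illustrative:

#include <sys/mount.h>

int main(void)
{
	/* hide other users' /proc/PID/ entries except from group 1001 */
	if (mount("proc", "/proc", "proc", MS_REMOUNT,
		  "hidepid=2,gid=1001") != 0)
		return 1;
	return 0;
}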
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Greg KH <greg@kroah.com>
Cc: Theodore Tso <tytso@MIT.EDU>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: James Morris <jmorris@namei.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-11 03:11:31 +04:00
|
|
|
.permission = proc_pid_permission,
|
2005-04-17 02:20:36 +04:00
|
|
|
};
|
|
|
|
|
2007-10-19 10:40:03 +04:00
|
|
|
static void proc_flush_task_mnt(struct vfsmount *mnt, pid_t pid, pid_t tgid)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
2006-06-26 11:25:48 +04:00
|
|
|
struct dentry *dentry, *leader, *dir;
|
2006-06-26 11:25:54 +04:00
|
|
|
char buf[PROC_NUMBUF];
|
2006-06-26 11:25:48 +04:00
|
|
|
struct qstr name;
|
|
|
|
|
|
|
|
name.name = buf;
|
2007-10-19 10:40:03 +04:00
|
|
|
name.len = snprintf(buf, sizeof(buf), "%d", pid);
|
2013-02-12 08:20:37 +04:00
|
|
|
/* no ->d_hash() rejects on procfs */
|
2007-10-19 10:40:03 +04:00
|
|
|
dentry = d_hash_and_lookup(mnt->mnt_root, &name);
|
2006-06-26 11:25:48 +04:00
|
|
|
if (dentry) {
|
2014-02-13 22:24:23 +04:00
|
|
|
d_invalidate(dentry);
|
2006-06-26 11:25:48 +04:00
|
|
|
dput(dentry);
|
|
|
|
}
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2014-12-11 02:54:56 +03:00
|
|
|
if (pid == tgid)
|
|
|
|
return;
|
|
|
|
|
2006-06-26 11:25:48 +04:00
|
|
|
name.name = buf;
|
2007-10-19 10:40:03 +04:00
|
|
|
name.len = snprintf(buf, sizeof(buf), "%d", tgid);
|
|
|
|
leader = d_hash_and_lookup(mnt->mnt_root, &name);
|
2006-06-26 11:25:48 +04:00
|
|
|
if (!leader)
|
|
|
|
goto out;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-06-26 11:25:48 +04:00
|
|
|
name.name = "task";
|
|
|
|
name.len = strlen(name.name);
|
|
|
|
dir = d_hash_and_lookup(leader, &name);
|
|
|
|
if (!dir)
|
|
|
|
goto out_put_leader;
|
|
|
|
|
|
|
|
name.name = buf;
|
2007-10-19 10:40:03 +04:00
|
|
|
name.len = snprintf(buf, sizeof(buf), "%d", pid);
|
2006-06-26 11:25:48 +04:00
|
|
|
dentry = d_hash_and_lookup(dir, &name);
|
|
|
|
if (dentry) {
|
2014-02-13 22:24:23 +04:00
|
|
|
d_invalidate(dentry);
|
2006-06-26 11:25:48 +04:00
|
|
|
dput(dentry);
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
2006-06-26 11:25:48 +04:00
|
|
|
|
|
|
|
dput(dir);
|
|
|
|
out_put_leader:
|
|
|
|
dput(leader);
|
|
|
|
out:
|
|
|
|
return;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
2007-10-22 08:00:10 +04:00
|
|
|
/**
|
|
|
|
* proc_flush_task - Remove dcache entries for @task from the /proc dcache.
|
|
|
|
* @task: task that should be flushed.
|
|
|
|
*
|
|
|
|
* When flushing dentries from proc, one needs to flush them from global
|
2007-10-19 10:40:03 +04:00
|
|
|
* proc (proc_mnt) and from all the namespaces' procs this task was seen
|
2007-10-22 08:00:10 +04:00
|
|
|
* in. This call is supposed to do all of this job.
|
|
|
|
*
|
|
|
|
* Looks in the dcache for
|
|
|
|
* /proc/@pid
|
|
|
|
* /proc/@tgid/task/@pid
|
|
|
|
* if either directory is present, flushes it and all of its children
|
|
|
|
* from the dcache.
|
|
|
|
*
|
|
|
|
* It is safe and reasonable to cache /proc entries for a task until
|
|
|
|
* that task exits. After that they just clog up the dcache with
|
|
|
|
* useless entries, possibly causing useful dcache entries to be
|
|
|
|
* flushed instead. This routine is provided to flush those useless
|
|
|
|
* dcache entries at process exit time.
|
|
|
|
*
|
|
|
|
* NOTE: This routine is just an optimization so it does not guarantee
|
|
|
|
* that no dcache entries will exist at process exit time; it
|
|
|
|
* just makes it very unlikely that any will persist.
|
2007-10-19 10:40:03 +04:00
|
|
|
*/
|
|
|
|
|
|
|
|
void proc_flush_task(struct task_struct *task)
|
|
|
|
{
|
2007-11-15 04:00:07 +03:00
|
|
|
int i;
|
proc_flush_task: flush /proc/tid/task/pid when a sub-thread exits
The exiting sub-thread flushes /proc/pid only, but this doesn't buy
much: ps and friends mostly use /proc/tid/task/pid.
Remove "if (thread_group_leader())" checks from proc_flush_task() path,
this means we always remove /proc/tid/task/pid dentry on exit, and this
actually matches the comment above proc_flush_task().
The test-case:
#include <stdio.h>
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *tfunc(void *arg)
{
	char name[256];

	/* gettid() has no glibc wrapper on older systems */
	sprintf(name, "/proc/%d/task/%ld/status",
		getpid(), syscall(SYS_gettid));
	close(open(name, O_RDONLY));
	return NULL;
}

int main(void)
{
	pthread_t t;

	for (;;) {
		if (!pthread_create(&t, NULL, &tfunc, NULL))
			pthread_join(t, NULL);
	}
}
slabtop shows that pid/proc_inode_cache/etc grow quickly and
"indefinitely" until the task is killed or shrink_slab() is called, not
good. And the main thread needs a lot of time to exit.
The same can happen if something like "ps -efL" runs continuously, while
some application spawns short-lived threads.
Reported-by: "James M. Leddy" <jleddy@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Dominic Duval <dduval@redhat.com>
Cc: Frank Hirtz <fhirtz@redhat.com>
Cc: "Fuller, Johnray" <Johnray.Fuller@gs.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Paul Batkowski <pbatkowski@redhat.com>
Cc: Roland McGrath <roland@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 03:45:34 +04:00
|
|
|
struct pid *pid, *tgid;
|
2007-10-19 10:40:11 +04:00
|
|
|
struct upid *upid;
|
|
|
|
|
|
|
|
pid = task_pid(task);
|
2009-09-23 03:45:34 +04:00
|
|
|
tgid = task_tgid(task);
|
2007-10-19 10:40:11 +04:00
|
|
|
|
2007-11-15 04:00:07 +03:00
|
|
|
for (i = 0; i <= pid->level; i++) {
|
2007-10-19 10:40:11 +04:00
|
|
|
upid = &pid->numbers[i];
|
|
|
|
proc_flush_task_mnt(upid->ns->proc_mnt, upid->nr,
|
2009-09-23 03:45:34 +04:00
|
|
|
tgid->numbers[i].nr);
|
2007-10-19 10:40:11 +04:00
|
|
|
}
|
2007-10-19 10:40:03 +04:00
|
|
|
}
|
|
|
|
|
2013-06-15 11:15:20 +04:00
|
|
|
static int proc_pid_instantiate(struct inode *dir,
|
|
|
|
struct dentry * dentry,
|
|
|
|
struct task_struct *task, const void *ptr)
|
2006-10-02 13:18:49 +04:00
|
|
|
{
|
|
|
|
struct inode *inode;
|
|
|
|
|
2006-10-02 13:18:49 +04:00
|
|
|
inode = proc_pid_make_inode(dir->i_sb, task);
|
2006-10-02 13:18:49 +04:00
|
|
|
if (!inode)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
inode->i_mode = S_IFDIR|S_IRUGO|S_IXUGO;
|
|
|
|
inode->i_op = &proc_tgid_base_inode_operations;
|
|
|
|
inode->i_fop = &proc_tgid_base_operations;
|
|
|
|
inode->i_flags |= S_IMMUTABLE;
|
2008-06-06 09:46:53 +04:00
|
|
|
|
2011-10-28 16:13:29 +04:00
|
|
|
set_nlink(inode, 2 + pid_entry_count_dirs(tgid_base_stuff,
|
|
|
|
ARRAY_SIZE(tgid_base_stuff)));
|
2006-10-02 13:18:49 +04:00
|
|
|
|
2011-01-07 09:49:55 +03:00
|
|
|
d_set_d_op(dentry, &pid_dentry_operations);
|
2006-10-02 13:18:49 +04:00
|
|
|
|
|
|
|
d_add(dentry, inode);
|
|
|
|
/* Close the race of the process dying before we return the dentry */
|
2012-06-11 00:03:43 +04:00
|
|
|
if (pid_revalidate(dentry, 0))
|
2013-06-15 11:15:20 +04:00
|
|
|
return 0;
|
2006-10-02 13:18:49 +04:00
|
|
|
out:
|
2013-06-15 11:15:20 +04:00
|
|
|
return -ENOENT;
|
2006-10-02 13:18:49 +04:00
|
|
|
}
|
|
|
|
|
2012-06-11 01:13:09 +04:00
|
|
|
struct dentry *proc_pid_lookup(struct inode *dir, struct dentry * dentry, unsigned int flags)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
2014-08-09 01:21:27 +04:00
|
|
|
int result = -ENOENT;
|
2005-04-17 02:20:36 +04:00
|
|
|
struct task_struct *task;
|
|
|
|
unsigned tgid;
|
2007-10-19 10:40:14 +04:00
|
|
|
struct pid_namespace *ns;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2014-08-09 01:21:25 +04:00
|
|
|
tgid = name_to_int(&dentry->d_name);
|
2005-04-17 02:20:36 +04:00
|
|
|
if (tgid == ~0U)
|
|
|
|
goto out;
|
|
|
|
|
2007-10-19 10:40:14 +04:00
|
|
|
ns = dentry->d_sb->s_fs_info;
|
2006-06-26 11:25:51 +04:00
|
|
|
rcu_read_lock();
|
2007-10-19 10:40:14 +04:00
|
|
|
task = find_task_by_pid_ns(tgid, ns);
|
2005-04-17 02:20:36 +04:00
|
|
|
if (task)
|
|
|
|
get_task_struct(task);
|
2006-06-26 11:25:51 +04:00
|
|
|
rcu_read_unlock();
|
2005-04-17 02:20:36 +04:00
|
|
|
if (!task)
|
|
|
|
goto out;
|
|
|
|
|
2006-10-02 13:18:49 +04:00
|
|
|
result = proc_pid_instantiate(dir, dentry, task, NULL);
|
2005-04-17 02:20:36 +04:00
|
|
|
put_task_struct(task);
|
|
|
|
out:
|
2013-06-15 11:15:20 +04:00
|
|
|
return ERR_PTR(result);
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
[PATCH] proc: readdir race fix (take 3)
The problem: An opendir, readdir, closedir sequence can fail to report
process ids that are continually in use throughout the sequence of system
calls. For this race to trigger the process that proc_pid_readdir stops at
must exit before readdir is called again.
This can cause ps to fail to report processes, and it is in violation of
posix guarantees and normal application expectations with respect to
readdir.
Currently there is no way to work around this problem in user space short
of providing a gargantuan buffer so that the entire directory read
happens in one system call.
This patch implements the normal directory semantics for proc, which
guarantee that a directory entry that is neither created nor destroyed
while reading the directory will be returned. For directories that are
either created or destroyed during the readdir you may or may not see them.
Furthermore you may seek to a directory offset you have previously seen.
These are the guarantees that ext[23] provides and that posix requires, and
more importantly that user space expects. Plus it is a simple semantic to
implement reliably. It is just a matter of calling readdir a
second time if you are wondering whether something new has shown up.
These better semantics are implemented by scanning through the pids in
numerical order and by making the file offset a pid plus a fixed offset.
The pid scan happens on the pid bitmap, which when you look at it is
remarkably efficient for a brute force algorithm. Given that a typical
cache line is 64 bytes and thus covers space for 64*8 == 512 pids, there
are only 64 cache lines for the entire 32K pid space. A typical system
will have 100 pids or more, so scanning the bitmap actually touches fewer
cache lines than walking a linked list would, and the worst case of having
to scan the entire pid bitmap is pretty reasonable.
If we need something more efficient we can go to a more efficient data
structure for indexing the pids, but for now what we have should be
sufficient.
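As a sketch of the offset scheme (simplified, no locking; next_tgid()
below does the real work, and TGID_OFFSET stands for the fixed offset
mentioned above):

static struct tgid_iter resume_tgid(struct pid_namespace *ns, loff_t pos)
{
	/* f_pos encodes TGID_OFFSET + tgid, so resuming after an exited
	 * process is just a find-first-pid-at-or-above bitmap scan */
	struct tgid_iter iter = {
		.tgid = pos - TGID_OFFSET,
		.task = NULL,
	};

	return next_tgid(ns, iter);
}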
In addition this takes no additional locks and is actually less code than
what we are doing now.
Also another very subtle bug in this area has been fixed. It is possible
to catch a task in the middle of de_thread, where a thread is assuming the
identity of its thread group leader. This patch carefully handles that case
so if we hit it we don't fail to return the pid that is undergoing the
de_thread dance.
Thanks to KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> for
providing the first fix, pointing this out and working on it.
[oleg@tv-sign.ru: fix it]
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-10-02 13:17:04 +04:00
|
|
|
* Find the first task with tgid >= tgid
|
2006-06-26 11:25:50 +04:00
|
|
|
*
|
2005-04-17 02:20:36 +04:00
|
|
|
*/
|
2007-11-29 03:21:26 +03:00
|
|
|
struct tgid_iter {
|
|
|
|
unsigned int tgid;
|
2006-10-02 13:17:04 +04:00
|
|
|
struct task_struct *task;
|
2007-11-29 03:21:26 +03:00
|
|
|
};
|
|
|
|
static struct tgid_iter next_tgid(struct pid_namespace *ns, struct tgid_iter iter)
|
|
|
|
{
|
2006-10-02 13:17:04 +04:00
|
|
|
struct pid *pid;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2007-11-29 03:21:26 +03:00
|
|
|
if (iter.task)
|
|
|
|
put_task_struct(iter.task);
|
2006-06-26 11:25:51 +04:00
|
|
|
rcu_read_lock();
|
2006-10-02 13:17:04 +04:00
|
|
|
retry:
|
2007-11-29 03:21:26 +03:00
|
|
|
iter.task = NULL;
|
|
|
|
pid = find_ge_pid(iter.tgid, ns);
|
2006-10-02 13:17:04 +04:00
|
|
|
if (pid) {
|
2007-11-29 03:21:26 +03:00
|
|
|
iter.tgid = pid_nr_ns(pid, ns);
|
|
|
|
iter.task = pid_task(pid, PIDTYPE_PID);
|
2006-10-02 13:17:04 +04:00
|
|
|
/* What we want to know is if the pid we have found is the
|
|
|
|
* pid of a thread_group_leader. Testing for the task
|
|
|
|
* being a thread_group_leader is the obvious thing
|
|
|
|
* to do, but there is a window when it fails, due to
|
|
|
|
* the pid transfer logic in de_thread.
|
|
|
|
*
|
|
|
|
* So we perform the straightforward test of seeing
|
|
|
|
* if the pid we have found is the pid of a thread
|
|
|
|
* group leader, and don't worry if the task we have
|
|
|
|
* found doesn't happen to be a thread group leader,
|
|
|
|
* as we don't care in the case of readdir.
|
|
|
|
*/
|
2007-11-29 03:21:26 +03:00
|
|
|
if (!iter.task || !has_group_leader_pid(iter.task)) {
|
|
|
|
iter.tgid += 1;
|
2006-10-02 13:17:04 +04:00
|
|
|
goto retry;
|
2007-11-29 03:21:26 +03:00
|
|
|
}
|
|
|
|
get_task_struct(iter.task);
|
2006-06-26 11:25:50 +04:00
|
|
|
}
|
2006-06-26 11:25:51 +04:00
|
|
|
rcu_read_unlock();
|
2007-11-29 03:21:26 +03:00
|
|
|
return iter;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
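Read in sequence, the annotated fragments above reassemble into roughly the following function. This is a consolidated sketch for readability only, not a substitute for the annotated lines; the matching rcu_read_lock() is assumed to be taken at the top of the function, before the lines excerpted here.

static struct tgid_iter next_tgid(struct pid_namespace *ns, struct tgid_iter iter)
{
	struct pid *pid;

	rcu_read_lock();
retry:
	iter.task = NULL;
	/* Scan the pid bitmap upward for the next allocated pid. */
	pid = find_ge_pid(iter.tgid, ns);
	if (pid) {
		iter.tgid = pid_nr_ns(pid, ns);
		iter.task = pid_task(pid, PIDTYPE_PID);
		/* Skip pids that are not thread group leaders, allowing
		 * for the de_thread window described in the comment above. */
		if (!iter.task || !has_group_leader_pid(iter.task)) {
			iter.tgid += 1;
			goto retry;
		}
		get_task_struct(iter.task);
	}
	rcu_read_unlock();
	return iter;
}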
|
|
|
|
2014-07-31 14:10:50 +04:00
|
|
|
#define TGID_OFFSET (FIRST_PROCESS_ENTRY + 2)
|
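TGID_OFFSET is what ties directory offsets to pids: FIRST_PROCESS_ENTRY (defined elsewhere in procfs) marks where process entries begin in /proc, the two extra slots carry the "self" and "thread-self" links emitted below, and every later position encodes a tgid directly. The helpers below are purely illustrative, not functions that exist in the kernel source; they just restate the arithmetic proc_pid_readdir() uses below.

/* Illustrative helpers restating the offset<->tgid arithmetic below. */
static inline int tgid_from_pos(loff_t pos)
{
	return pos - TGID_OFFSET;	/* resume the pid scan from here */
}

static inline loff_t pos_from_tgid(int tgid)
{
	return tgid + TGID_OFFSET;	/* remember where readdir left off */
}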
2006-10-02 13:17:04 +04:00
|
|
|
|
2005-04-17 02:20:36 +04:00
|
|
|
/* for the /proc/ directory itself, after non-process stuff has been done */
|
2013-05-16 20:07:31 +04:00
|
|
|
int proc_pid_readdir(struct file *file, struct dir_context *ctx)
|
2005-04-17 02:20:36 +04:00
|
|
|
{
|
2007-11-29 03:21:26 +03:00
|
|
|
struct tgid_iter iter;
|
2014-10-31 07:42:35 +03:00
|
|
|
struct pid_namespace *ns = file_inode(file)->i_sb->s_fs_info;
|
2013-05-16 20:07:31 +04:00
|
|
|
loff_t pos = ctx->pos;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2013-03-30 03:27:05 +04:00
|
|
|
if (pos >= PID_MAX_LIMIT + TGID_OFFSET)
|
2013-05-16 20:07:31 +04:00
|
|
|
return 0;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2014-07-31 14:10:50 +04:00
|
|
|
if (pos == TGID_OFFSET - 2) {
|
2013-06-15 10:45:10 +04:00
|
|
|
struct inode *inode = ns->proc_self->d_inode;
|
|
|
|
if (!dir_emit(ctx, "self", 4, inode->i_ino, DT_LNK))
|
2013-05-16 20:07:31 +04:00
|
|
|
return 0;
|
2014-07-31 14:10:50 +04:00
|
|
|
ctx->pos = pos = pos + 1;
|
|
|
|
}
|
|
|
|
if (pos == TGID_OFFSET - 1) {
|
|
|
|
struct inode *inode = ns->proc_thread_self->d_inode;
|
|
|
|
if (!dir_emit(ctx, "thread-self", 11, inode->i_ino, DT_LNK))
|
|
|
|
return 0;
|
|
|
|
ctx->pos = pos = pos + 1;
|
2013-03-30 03:27:05 +04:00
|
|
|
}
|
2014-07-31 14:10:50 +04:00
|
|
|
iter.tgid = pos - TGID_OFFSET;
|
2007-11-29 03:21:26 +03:00
|
|
|
iter.task = NULL;
|
|
|
|
for (iter = next_tgid(ns, iter);
|
|
|
|
iter.task;
|
|
|
|
iter.tgid += 1, iter = next_tgid(ns, iter)) {
|
2013-05-16 20:07:31 +04:00
|
|
|
char name[PROC_NUMBUF];
|
|
|
|
int len;
|
|
|
|
if (!has_pid_permissions(ns, iter.task, 2))
|
|
|
|
continue;
|
procfs: add hidepid= and gid= mount options
Add support for mount options to restrict access to /proc/PID/
directories. The default backward-compatible "relaxed" behaviour is left
untouched.
The first mount option is called "hidepid" and its value defines how much
info about processes we want to be available for non-owners:
hidepid=0 (default) means the old behavior - anybody may read all
world-readable /proc/PID/* files.
hidepid=1 means users may not access any /proc/<pid>/ directories except
their own. Sensitive files like cmdline, sched*, status are now protected
against other users. As permission checking is done in proc_pid_permission()
and the files' permissions are left untouched, programs expecting specific
files' modes are not confused.
hidepid=2 means hidepid=1 plus all /proc/PID/ will be invisible to other
users. It doesn't mean that it hides whether a process exists (it can be
learned by other means, e.g. by kill -0 $PID), but it hides process' euid
and egid. It complicates an intruder's task of gathering info about running
processes, whether some daemon runs with elevated privileges, whether
another user runs some sensitive program, whether other users run any
program at all, etc.
gid=XXX defines a group that will be able to gather all processes' info
(as in hidepid=0 mode). This group should be used instead of putting a
nonroot user in the sudoers file or similar. However, untrusted users (like
daemons, etc.) which are not supposed to monitor the tasks in the whole
system should not be added to the group.
hidepid=1 or higher is designed to restrict access to procfs files, which
might reveal some sensitive private information like precise keystrokes
timings:
http://www.openwall.com/lists/oss-security/2011/11/05/3
hidepid=1/2 doesn't break monitoring userspace tools. ps, top, pgrep, and
conky gracefully handle EPERM/ENOENT and behave as if the current user is
the only user running processes. pstree shows the process subtree which
contains "pstree" process.
Note: the patch doesn't deal with setuid/setgid issues of keeping
preopened descriptors of procfs files (like
https://lkml.org/lkml/2011/2/7/368). We rely on the fact that the leaked
information, like the scheduling counters of setuid apps, doesn't threaten
anybody's privacy - only the user who started the setuid program may read
the counters.
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Greg KH <greg@kroah.com>
Cc: Theodore Tso <tytso@MIT.EDU>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: James Morris <jmorris@namei.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-01-11 03:11:31 +04:00
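The has_pid_permissions() call in the readdir loop above is where this policy is enforced. Its body sits outside this excerpt; as a hedged sketch of that era's code, it checks the hidepid level, then the gid= override, then falls back to a ptrace-style access check:

static bool has_pid_permissions(struct pid_namespace *pid,
				struct task_struct *task,
				int hide_pid_min)
{
	/* hide_pid_min is the hidepid level at which this access becomes
	 * restricted; the readdir loop above passes 2 (invisible). */
	if (pid->hide_pid < hide_pid_min)
		return true;
	/* Members of the gid= group keep full visibility. */
	if (in_group_p(pid->pid_gid))
		return true;
	/* Otherwise require ptrace-read access to the task. */
	return ptrace_may_access(task, PTRACE_MODE_READ);
}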
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
len = snprintf(name, sizeof(name), "%d", iter.tgid);
|
|
|
|
ctx->pos = iter.tgid + TGID_OFFSET;
|
|
|
|
if (!proc_fill_cache(file, ctx, name, len,
|
|
|
|
proc_pid_instantiate, iter.task, NULL)) {
|
2007-11-29 03:21:26 +03:00
|
|
|
put_task_struct(iter.task);
|
2013-05-16 20:07:31 +04:00
|
|
|
return 0;
|
2005-04-17 02:20:36 +04:00
|
|
|
}
|
2006-06-26 11:25:50 +04:00
|
|
|
}
|
2013-05-16 20:07:31 +04:00
|
|
|
ctx->pos = PID_MAX_LIMIT + TGID_OFFSET;
|
2006-06-26 11:25:50 +04:00
|
|
|
return 0;
|
|
|
|
}
|
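Because each offset returned by the loop above encodes the pid itself (iter.tgid + TGID_OFFSET), a plain userspace scan gets the guarantee described in the readdir race fix: any pid alive across the whole opendir/readdir/closedir sequence is reported exactly once. A minimal, self-contained demonstration (not from the kernel tree):

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

/* Print every process directory in /proc; purely numeric names are pids. */
int main(void)
{
	DIR *d = opendir("/proc");
	struct dirent *de;

	if (!d)
		return 1;
	while ((de = readdir(d)) != NULL) {
		char *end;
		long pid = strtol(de->d_name, &end, 10);

		if (*end == '\0' && pid > 0)
			printf("%ld\n", pid);
	}
	closedir(d);
	return 0;
}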
2005-04-17 02:20:36 +04:00
|
|
|
|
2006-10-02 13:17:05 +04:00
|
|
|
/*
|
|
|
|
* Tasks
|
|
|
|
*/
|
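The table below is built from DIR/LNK/REG/ONE entries whose definitions sit earlier in base.c, outside this excerpt. As a hedged sketch, they expand to struct pid_entry initializers along these lines (details may differ from the real macros):

#define NOD(NAME, MODE, IOP, FOP, OP) {		\
	.name = (NAME),				\
	.len  = sizeof(NAME) - 1,		\
	.mode = MODE,				\
	.iop  = IOP,				\
	.fop  = FOP,				\
	.op   = OP,				\
}
#define DIR(NAME, MODE, iops, fops)	\
	NOD(NAME, (S_IFDIR|(MODE)), &iops, &fops, {} )
#define LNK(NAME, get_link)				\
	NOD(NAME, (S_IFLNK|S_IRWXUGO),			\
	    &proc_pid_link_inode_operations, NULL,	\
	    { .proc_get_link = get_link } )
#define REG(NAME, MODE, fops)		\
	NOD(NAME, (S_IFREG|(MODE)), NULL, &fops, {})
#define ONE(NAME, MODE, show)				\
	NOD(NAME, (S_IFREG|(MODE)),			\
	    NULL, &proc_single_file_operations,		\
	    { .proc_show = show } )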
2007-05-08 11:26:15 +04:00
|
|
|
static const struct pid_entry tid_base_stuff[] = {
|
2008-11-10 01:32:52 +03:00
|
|
|
DIR("fd", S_IRUSR|S_IXUSR, proc_fd_inode_operations, proc_fd_operations),
|
2010-04-28 00:13:06 +04:00
|
|
|
DIR("fdinfo", S_IRUSR|S_IXUSR, proc_fdinfo_inode_operations, proc_fdinfo_operations),
|
2010-03-08 03:41:34 +03:00
|
|
|
DIR("ns", S_IRUSR|S_IXUGO, proc_ns_dir_inode_operations, proc_ns_dir_operations),
|
2014-08-01 03:27:08 +04:00
|
|
|
#ifdef CONFIG_NET
|
|
|
|
DIR("net", S_IRUGO|S_IXUGO, proc_net_inode_operations, proc_net_operations),
|
|
|
|
#endif
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("environ", S_IRUSR, proc_environ_operations),
|
2014-08-09 01:21:35 +04:00
|
|
|
ONE("auxv", S_IRUSR, proc_pid_auxv),
|
2008-11-10 01:32:52 +03:00
|
|
|
ONE("status", S_IRUGO, proc_pid_status),
|
2014-04-08 02:38:36 +04:00
|
|
|
ONE("personality", S_IRUSR, proc_pid_personality),
|
2014-08-09 01:21:37 +04:00
|
|
|
ONE("limits", S_IRUGO, proc_pid_limits),
|
2007-07-09 20:52:00 +04:00
|
|
|
#ifdef CONFIG_SCHED_DEBUG
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("sched", S_IRUGO|S_IWUSR, proc_pid_sched_operations),
|
2008-07-26 06:46:00 +04:00
|
|
|
#endif
|
2009-12-15 05:00:05 +03:00
|
|
|
REG("comm", S_IRUGO|S_IWUSR, proc_pid_set_comm_operations),
|
2008-07-26 06:46:00 +04:00
|
|
|
#ifdef CONFIG_HAVE_ARCH_TRACEHOOK
|
2014-08-09 01:21:39 +04:00
|
|
|
ONE("syscall", S_IRUSR, proc_pid_syscall),
|
2007-07-09 20:52:00 +04:00
|
|
|
#endif
|
2014-08-09 01:21:41 +04:00
|
|
|
ONE("cmdline", S_IRUGO, proc_pid_cmdline),
|
2008-11-10 01:32:52 +03:00
|
|
|
ONE("stat", S_IRUGO, proc_tid_stat),
|
|
|
|
ONE("statm", S_IRUGO, proc_pid_statm),
|
procfs: mark thread stack correctly in proc/<pid>/maps
The stack for a new thread is mapped by userspace code and passed via
sys_clone. This memory is currently seen as anonymous in
/proc/<pid>/maps, which makes it difficult to ascertain which mappings
are being used for thread stacks. This patch uses the individual task
stack pointers to determine which vmas are actually thread stacks.
For a multithreaded program like the following:
#include <pthread.h>
void *thread_main(void *foo)
{
while(1);
}
int main()
{
pthread_t t;
pthread_create(&t, NULL, thread_main, NULL);
pthread_join(t, NULL);
}
proc/PID/maps looks like the following:
00400000-00401000 r-xp 00000000 fd:0a 3671804 /home/siddhesh/a.out
00600000-00601000 rw-p 00000000 fd:0a 3671804 /home/siddhesh/a.out
019ef000-01a10000 rw-p 00000000 00:00 0 [heap]
7f8a44491000-7f8a44492000 ---p 00000000 00:00 0
7f8a44492000-7f8a44c92000 rw-p 00000000 00:00 0
7f8a44c92000-7f8a44e3d000 r-xp 00000000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a44e3d000-7f8a4503d000 ---p 001ab000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a4503d000-7f8a45041000 r--p 001ab000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a45041000-7f8a45043000 rw-p 001af000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a45043000-7f8a45048000 rw-p 00000000 00:00 0
7f8a45048000-7f8a4505f000 r-xp 00000000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4505f000-7f8a4525e000 ---p 00017000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4525e000-7f8a4525f000 r--p 00016000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4525f000-7f8a45260000 rw-p 00017000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a45260000-7f8a45264000 rw-p 00000000 00:00 0
7f8a45264000-7f8a45286000 r-xp 00000000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45457000-7f8a4545a000 rw-p 00000000 00:00 0
7f8a45484000-7f8a45485000 rw-p 00000000 00:00 0
7f8a45485000-7f8a45486000 r--p 00021000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45486000-7f8a45487000 rw-p 00022000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45487000-7f8a45488000 rw-p 00000000 00:00 0
7fff6273b000-7fff6275c000 rw-p 00000000 00:00 0 [stack]
7fff627ff000-7fff62800000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
Here, one could guess that 7f8a44492000-7f8a44c92000 is a stack since
it is preceded by a vma that has no permissions (7f8a44491000-7f8a44492000),
but that is not always a reliable way to find out which vma is a thread
stack. Also, /proc/PID/maps and /proc/PID/task/TID/maps have the same
content.
With this patch in place, /proc/PID/task/TID/maps are treated as 'maps
as the task would see it' and hence, only the vma that that task uses as
stack is marked as [stack]. All other 'stack' vmas are marked as
anonymous memory. /proc/PID/maps acts as a thread group level view,
where all thread stack vmas are marked as [stack:TID], where TID is the
thread ID of the task that uses that vma as its stack, while the process
stack is marked as [stack].
So /proc/PID/maps will look like this:
00400000-00401000 r-xp 00000000 fd:0a 3671804 /home/siddhesh/a.out
00600000-00601000 rw-p 00000000 fd:0a 3671804 /home/siddhesh/a.out
019ef000-01a10000 rw-p 00000000 00:00 0 [heap]
7f8a44491000-7f8a44492000 ---p 00000000 00:00 0
7f8a44492000-7f8a44c92000 rw-p 00000000 00:00 0 [stack:1442]
7f8a44c92000-7f8a44e3d000 r-xp 00000000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a44e3d000-7f8a4503d000 ---p 001ab000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a4503d000-7f8a45041000 r--p 001ab000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a45041000-7f8a45043000 rw-p 001af000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a45043000-7f8a45048000 rw-p 00000000 00:00 0
7f8a45048000-7f8a4505f000 r-xp 00000000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4505f000-7f8a4525e000 ---p 00017000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4525e000-7f8a4525f000 r--p 00016000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4525f000-7f8a45260000 rw-p 00017000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a45260000-7f8a45264000 rw-p 00000000 00:00 0
7f8a45264000-7f8a45286000 r-xp 00000000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45457000-7f8a4545a000 rw-p 00000000 00:00 0
7f8a45484000-7f8a45485000 rw-p 00000000 00:00 0
7f8a45485000-7f8a45486000 r--p 00021000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45486000-7f8a45487000 rw-p 00022000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45487000-7f8a45488000 rw-p 00000000 00:00 0
7fff6273b000-7fff6275c000 rw-p 00000000 00:00 0 [stack]
7fff627ff000-7fff62800000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
Thus marking all vmas that are used as stacks by the threads in the
thread group along with the process stack. The task level maps will
however look like this:
00400000-00401000 r-xp 00000000 fd:0a 3671804 /home/siddhesh/a.out
00600000-00601000 rw-p 00000000 fd:0a 3671804 /home/siddhesh/a.out
019ef000-01a10000 rw-p 00000000 00:00 0 [heap]
7f8a44491000-7f8a44492000 ---p 00000000 00:00 0
7f8a44492000-7f8a44c92000 rw-p 00000000 00:00 0 [stack]
7f8a44c92000-7f8a44e3d000 r-xp 00000000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a44e3d000-7f8a4503d000 ---p 001ab000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a4503d000-7f8a45041000 r--p 001ab000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a45041000-7f8a45043000 rw-p 001af000 fd:00 2097482 /lib64/libc-2.14.90.so
7f8a45043000-7f8a45048000 rw-p 00000000 00:00 0
7f8a45048000-7f8a4505f000 r-xp 00000000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4505f000-7f8a4525e000 ---p 00017000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4525e000-7f8a4525f000 r--p 00016000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a4525f000-7f8a45260000 rw-p 00017000 fd:00 2099938 /lib64/libpthread-2.14.90.so
7f8a45260000-7f8a45264000 rw-p 00000000 00:00 0
7f8a45264000-7f8a45286000 r-xp 00000000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45457000-7f8a4545a000 rw-p 00000000 00:00 0
7f8a45484000-7f8a45485000 rw-p 00000000 00:00 0
7f8a45485000-7f8a45486000 r--p 00021000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45486000-7f8a45487000 rw-p 00022000 fd:00 2097348 /lib64/ld-2.14.90.so
7f8a45487000-7f8a45488000 rw-p 00000000 00:00 0
7fff6273b000-7fff6275c000 rw-p 00000000 00:00 0
7fff627ff000-7fff62800000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
where only the vma that is being used as a stack by *that* task is
marked as [stack].
Analogous changes have been made to /proc/PID/smaps,
/proc/PID/numa_maps, /proc/PID/task/TID/smaps and
/proc/PID/task/TID/numa_maps. Relevant snippets from smaps and
numa_maps:
[siddhesh@localhost ~ ]$ pgrep a.out
1441
[siddhesh@localhost ~ ]$ cat /proc/1441/smaps | grep "\[stack"
7f8a44492000-7f8a44c92000 rw-p 00000000 00:00 0 [stack:1442]
7fff6273b000-7fff6275c000 rw-p 00000000 00:00 0 [stack]
[siddhesh@localhost ~ ]$ cat /proc/1441/task/1442/smaps | grep "\[stack"
7f8a44492000-7f8a44c92000 rw-p 00000000 00:00 0 [stack]
[siddhesh@localhost ~ ]$ cat /proc/1441/task/1441/smaps | grep "\[stack"
7fff6273b000-7fff6275c000 rw-p 00000000 00:00 0 [stack]
[siddhesh@localhost ~ ]$ cat /proc/1441/numa_maps | grep "stack"
7f8a44492000 default stack:1442 anon=2 dirty=2 N0=2
7fff6273a000 default stack anon=3 dirty=3 N0=3
[siddhesh@localhost ~ ]$ cat /proc/1441/task/1442/numa_maps | grep "stack"
7f8a44492000 default stack anon=2 dirty=2 N0=2
[siddhesh@localhost ~ ]$ cat /proc/1441/task/1441/numa_maps | grep "stack"
7fff6273a000 default stack anon=3 dirty=3 N0=3
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix build]
Signed-off-by: Siddhesh Poyarekar <siddhesh.poyarekar@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jamie Lokier <jamie@shareable.org>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-03-22 03:34:04 +04:00
|
|
|
REG("maps", S_IRUGO, proc_tid_maps_operations),
|
2012-06-01 03:26:43 +04:00
|
|
|
#ifdef CONFIG_CHECKPOINT_RESTORE
|
|
|
|
REG("children", S_IRUGO, proc_tid_children_operations),
|
|
|
|
#endif
|
2006-10-02 13:17:05 +04:00
|
|
|
#ifdef CONFIG_NUMA
|
2012-03-22 03:34:04 +04:00
|
|
|
REG("numa_maps", S_IRUGO, proc_tid_numa_maps_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("mem", S_IRUSR|S_IWUSR, proc_mem_operations),
|
|
|
|
LNK("cwd", proc_cwd_link),
|
|
|
|
LNK("root", proc_root_link),
|
|
|
|
LNK("exe", proc_exe_link),
|
|
|
|
REG("mounts", S_IRUGO, proc_mounts_operations),
|
|
|
|
REG("mountinfo", S_IRUGO, proc_mountinfo_operations),
|
2008-02-05 09:29:07 +03:00
|
|
|
#ifdef CONFIG_PROC_PAGE_MONITOR
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("clear_refs", S_IWUSR, proc_clear_refs_operations),
|
2012-03-22 03:34:04 +04:00
|
|
|
REG("smaps", S_IRUGO, proc_tid_smaps_operations),
|
2014-04-08 02:38:38 +04:00
|
|
|
REG("pagemap", S_IRUSR, proc_pagemap_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_SECURITY
|
2008-11-10 01:32:52 +03:00
|
|
|
DIR("attr", S_IRUGO|S_IXUGO, proc_attr_dir_inode_operations, proc_attr_dir_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_KALLSYMS
|
2014-08-09 01:21:44 +04:00
|
|
|
ONE("wchan", S_IRUGO, proc_pid_wchan),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2008-11-10 11:26:08 +03:00
|
|
|
#ifdef CONFIG_STACKTRACE
|
2014-04-08 02:38:36 +04:00
|
|
|
ONE("stack", S_IRUSR, proc_pid_stack),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_SCHEDSTATS
|
2014-08-09 01:21:46 +04:00
|
|
|
ONE("schedstat", S_IRUGO, proc_pid_schedstat),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2008-01-25 23:08:34 +03:00
|
|
|
#ifdef CONFIG_LATENCYTOP
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("latency", S_IRUGO, proc_lstats_operations),
|
2008-01-25 23:08:34 +03:00
|
|
|
#endif
|
2007-10-19 10:39:39 +04:00
|
|
|
#ifdef CONFIG_PROC_PID_CPUSET
|
2014-09-18 12:03:36 +04:00
|
|
|
ONE("cpuset", S_IRUGO, proc_cpuset_show),
|
2007-10-19 10:39:35 +04:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_CGROUPS
|
2014-09-18 12:03:15 +04:00
|
|
|
ONE("cgroup", S_IRUGO, proc_cgroup_show),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2014-08-09 01:21:48 +04:00
|
|
|
ONE("oom_score", S_IRUGO, proc_oom_score),
|
2012-11-13 05:53:04 +04:00
|
|
|
REG("oom_adj", S_IRUGO|S_IWUSR, proc_oom_adj_operations),
|
2010-08-10 04:19:46 +04:00
|
|
|
REG("oom_score_adj", S_IRUGO|S_IWUSR, proc_oom_score_adj_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#ifdef CONFIG_AUDITSYSCALL
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("loginuid", S_IWUSR|S_IRUGO, proc_loginuid_operations),
|
2011-02-16 05:24:05 +03:00
|
|
|
REG("sessionid", S_IRUGO, proc_sessionid_operations),
|
2006-10-02 13:17:05 +04:00
|
|
|
#endif
|
2006-12-08 13:39:47 +03:00
|
|
|
#ifdef CONFIG_FAULT_INJECTION
|
2008-11-10 01:32:52 +03:00
|
|
|
REG("make-it-fail", S_IRUGO|S_IWUSR, proc_fault_inject_operations),
|
2006-12-08 13:39:47 +03:00
|
|
|
#endif
|
2008-07-25 12:48:49 +04:00
|
|
|
#ifdef CONFIG_TASK_IO_ACCOUNTING
|
2014-08-09 01:21:50 +04:00
|
|
|
ONE("io", S_IRUSR, proc_tid_io_accounting),
|
2008-07-25 12:48:49 +04:00
|
|
|
#endif
|
arch/tile: more /proc and /sys file support
This change introduces a few of the less controversial /proc and
/proc/sys interfaces for tile, along with sysfs attributes for
various things that were originally proposed as /proc/tile files.
It also adjusts the "hardwall" proc API.
Arnd Bergmann reviewed the initial arch/tile submission, which
included a complete set of all the /proc/tile and /proc/sys/tile
knobs that we had added in a somewhat ad hoc way during initial
development, and provided feedback on where most of them should go.
One knob turned out to be similar enough to the existing
/proc/sys/debug/exception-trace that it was re-implemented to use
that model instead.
Another knob was /proc/tile/grid, which reported the "grid" dimensions
of a tile chip (e.g. 8x8 processors = 64-core chip). Arnd suggested
looking at sysfs for that, so this change moves that information
to a pair of sysfs attributes (chip_width and chip_height) in the
/sys/devices/system/cpu directory. We also put the "chip_serial"
and "chip_revision" information from our old /proc/tile/board file
as attributes in /sys/devices/system/cpu.
Other information collected via hypervisor APIs is now placed in
/sys/hypervisor. We create a /sys/hypervisor/type file (holding the
constant string "tilera") to be parallel with the Xen use of
/sys/hypervisor/type holding "xen". We create three top-level files,
"version" (the hypervisor's own version), "config_version" (the
version of the configuration file), and "hvconfig" (the contents of
the configuration file). The remaining information from our old
/proc/tile/board and /proc/tile/switch files becomes an attribute
group appearing under /sys/hypervisor/board/.
Finally, after some feedback from Arnd Bergmann for the previous
version of this patch, the /proc/tile/hardwall file is split up into
two conceptual parts. First, a directory /proc/tile/hardwall/ which
contains one file per active hardwall, each file named after the
hardwall's ID and holding a cpulist that says which cpus are enclosed by
the hardwall. Second, a /proc/PID file "hardwall" that is either
empty (for non-hardwall-using processes) or contains the hardwall ID.
Finally, this change pushes the /proc/sys/tile/unaligned_fixup/
directory, with knobs controlling the kernel code for handling the
fixup of unaligned exceptions.
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2011-05-26 20:40:09 +04:00
|
|
|
#ifdef CONFIG_HARDWALL
|
2014-08-09 01:21:52 +04:00
|
|
|
ONE("hardwall", S_IRUGO, proc_pid_hardwall),
|
2011-05-26 20:40:09 +04:00
|
|
|
#endif
|
2011-11-17 12:11:58 +04:00
|
|
|
#ifdef CONFIG_USER_NS
|
|
|
|
REG("uid_map", S_IRUGO|S_IWUSR, proc_uid_map_operations),
|
|
|
|
REG("gid_map", S_IRUGO|S_IWUSR, proc_gid_map_operations),
|
2012-08-30 12:24:05 +04:00
|
|
|
REG("projid_map", S_IRUGO|S_IWUSR, proc_projid_map_operations),
|
2014-12-02 21:27:26 +03:00
|
|
|
REG("setgroups", S_IRUGO|S_IWUSR, proc_setgroups_operations),
|
2011-11-17 12:11:58 +04:00
|
|
|
#endif
|
2006-10-02 13:17:05 +04:00
|
|
|
};
|
|
|
|
|
2013-05-16 20:07:31 +04:00
|
|
|
static int proc_tid_base_readdir(struct file *file, struct dir_context *ctx)
|
2006-10-02 13:17:05 +04:00
|
|
|
{
|
2013-05-16 20:07:31 +04:00
|
|
|
return proc_pident_readdir(file, ctx,
|
|
|
|
tid_base_stuff, ARRAY_SIZE(tid_base_stuff));
|
2006-10-02 13:17:05 +04:00
|
|
|
}
|
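proc_pident_readdir(), to which the thin wrapper above delegates, is defined earlier in base.c. A hedged sketch of its shape for that era: pin the task, emit the dot entries, then walk the pid_entry table from the current offset. The exact bounds checks may differ in the real function.

static int proc_pident_readdir(struct file *file, struct dir_context *ctx,
			       const struct pid_entry *ents, unsigned int nents)
{
	struct task_struct *task = get_proc_task(file_inode(file));
	const struct pid_entry *p;

	if (!task)
		return -ENOENT;
	if (!dir_emit_dots(file, ctx))
		goto out;
	if (ctx->pos >= nents + 2)
		goto out;
	/* ctx->pos counts ".", "..", then one slot per table entry. */
	for (p = ents + (ctx->pos - 2); p < ents + nents; p++) {
		if (!proc_fill_cache(file, ctx, p->name, p->len,
				     proc_pident_instantiate, task, p))
			break;
		ctx->pos++;
	}
out:
	put_task_struct(task);
	return 0;
}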
|
|
|
|
2012-06-11 01:13:09 +04:00
|
|
|
static struct dentry *proc_tid_base_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
|
|
|
|
{
|
2006-10-02 13:18:56 +04:00
|
|
|
return proc_pident_lookup(dir, dentry,
|
|
|
|
tid_base_stuff, ARRAY_SIZE(tid_base_stuff));
|
2006-10-02 13:17:05 +04:00
|
|
|
}
|
|
|
|
|
2007-02-12 11:55:34 +03:00
|
|
|
static const struct file_operations proc_tid_base_operations = {
|
2006-10-02 13:17:05 +04:00
|
|
|
.read = generic_read_dir,
|
2013-05-16 20:07:31 +04:00
|
|
|
.iterate = proc_tid_base_readdir,
|
llseek: automatically add .llseek fop
All file_operations should get a .llseek operation so we can make
nonseekable_open the default for future file operations without a
.llseek pointer.
The three cases that we can automatically detect are no_llseek, seq_lseek
and default_llseek. For cases where we can automatically prove that
the file offset is always ignored, we use noop_llseek, which maintains
the current behavior of not returning an error from a seek.
New drivers should normally not use noop_llseek but instead use no_llseek
and call nonseekable_open at open time. Existing drivers can be converted
to do the same when the maintainer knows for certain that no user code
relies on calling seek on the device file.
The generated code is often incorrectly indented and right now contains
comments that clarify for each added line why a specific variant was
chosen. In the version that gets submitted upstream, the comments will
be gone and I will manually fix the indentation, because there does not
seem to be a way to do that using coccinelle.
Some amount of new code is currently sitting in linux-next that should get
the same modifications, which I will do at the end of the merge window.
Many thanks to Julia Lawall for helping me learn to write a semantic
patch that does all this.
===== begin semantic patch =====
// This adds an llseek= method to all file operations,
// as a preparation for making no_llseek the default.
//
// The rules are
// - use no_llseek explicitly if we do nonseekable_open
// - use seq_lseek for sequential files
// - use default_llseek if we know we access f_pos
// - use noop_llseek if we know we don't access f_pos,
// but we still want to allow users to call lseek
//
@ open1 exists @
identifier nested_open;
@@
nested_open(...)
{
<+...
nonseekable_open(...)
...+>
}
@ open exists@
identifier open_f;
identifier i, f;
identifier open1.nested_open;
@@
int open_f(struct inode *i, struct file *f)
{
<+...
(
nonseekable_open(...)
|
nested_open(...)
)
...+>
}
@ read disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ read_no_fpos disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
... when != off
}
@ write @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ write_no_fpos @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
... when != off
}
@ fops0 @
identifier fops;
@@
struct file_operations fops = {
...
};
@ has_llseek depends on fops0 @
identifier fops0.fops;
identifier llseek_f;
@@
struct file_operations fops = {
...
.llseek = llseek_f,
...
};
@ has_read depends on fops0 @
identifier fops0.fops;
identifier read_f;
@@
struct file_operations fops = {
...
.read = read_f,
...
};
@ has_write depends on fops0 @
identifier fops0.fops;
identifier write_f;
@@
struct file_operations fops = {
...
.write = write_f,
...
};
@ has_open depends on fops0 @
identifier fops0.fops;
identifier open_f;
@@
struct file_operations fops = {
...
.open = open_f,
...
};
// use no_llseek if we call nonseekable_open
////////////////////////////////////////////
@ nonseekable1 depends on !has_llseek && has_open @
identifier fops0.fops;
identifier nso ~= "nonseekable_open";
@@
struct file_operations fops = {
... .open = nso, ...
+.llseek = no_llseek, /* nonseekable */
};
@ nonseekable2 depends on !has_llseek @
identifier fops0.fops;
identifier open.open_f;
@@
struct file_operations fops = {
... .open = open_f, ...
+.llseek = no_llseek, /* open uses nonseekable */
};
// use seq_lseek for sequential files
/////////////////////////////////////
@ seq depends on !has_llseek @
identifier fops0.fops;
identifier sr ~= "seq_read";
@@
struct file_operations fops = {
... .read = sr, ...
+.llseek = seq_lseek, /* we have seq_read */
};
// use default_llseek if there is a readdir
///////////////////////////////////////////
@ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier readdir_e;
@@
// any other fop is used that changes pos
struct file_operations fops = {
... .readdir = readdir_e, ...
+.llseek = default_llseek, /* readdir is present */
};
// use default_llseek if at least one of read/write touches f_pos
/////////////////////////////////////////////////////////////////
@ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read.read_f;
@@
// read fops use offset
struct file_operations fops = {
... .read = read_f, ...
+.llseek = default_llseek, /* read accesses f_pos */
};
@ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write.write_f;
@@
// write fops use offset
struct file_operations fops = {
... .write = write_f, ...
+ .llseek = default_llseek, /* write accesses f_pos */
};
// Use noop_llseek if neither read nor write accesses f_pos
///////////////////////////////////////////////////////////
@ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
identifier write_no_fpos.write_f;
@@
// write fops use offset
struct file_operations fops = {
...
.write = write_f,
.read = read_f,
...
+.llseek = noop_llseek, /* read and write both use no f_pos */
};
@ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write_no_fpos.write_f;
@@
struct file_operations fops = {
... .write = write_f, ...
+.llseek = noop_llseek, /* write uses no f_pos */
};
@ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
@@
struct file_operations fops = {
... .read = read_f, ...
+.llseek = noop_llseek, /* read uses no f_pos */
};
@ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
@@
struct file_operations fops = {
...
+.llseek = noop_llseek, /* no read or write fn */
};
===== End semantic patch =====
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Julia Lawall <julia@diku.dk>
Cc: Christoph Hellwig <hch@infradead.org>
2010-08-15 20:52:59 +04:00
|
|
|
	.llseek		= default_llseek,
};

static const struct inode_operations proc_tid_base_inode_operations = {
	.lookup		= proc_tid_base_lookup,
	.getattr	= pid_getattr,
	.setattr	= proc_setattr,
};

static int proc_task_instantiate(struct inode *dir,
	struct dentry *dentry, struct task_struct *task, const void *ptr)
{
	struct inode *inode;
	inode = proc_pid_make_inode(dir->i_sb, task);

	if (!inode)
		goto out;
	inode->i_mode = S_IFDIR | S_IRUGO | S_IXUGO;
	inode->i_op = &proc_tid_base_inode_operations;
	inode->i_fop = &proc_tid_base_operations;
	inode->i_flags |= S_IMMUTABLE;

	set_nlink(inode, 2 + pid_entry_count_dirs(tid_base_stuff,
						  ARRAY_SIZE(tid_base_stuff)));

	d_set_d_op(dentry, &pid_dentry_operations);

	d_add(dentry, inode);
	/* Close the race of the process dying before we return the dentry */
	if (pid_revalidate(dentry, 0))
		return 0;
out:
	return -ENOENT;
}
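
/*
 * Illustrative userspace sketch (not part of this file): the inode set up
 * above is what stat(2) reports for a /proc/<pid>/task/<tid> entry: an
 * immutable directory with mode dr-xr-xr-x (0555). A group leader's tid
 * equals its pid, so /proc/self/task/<pid> always exists:
 *
 *	#include <stdio.h>
 *	#include <unistd.h>
 *	#include <sys/stat.h>
 *
 *	int main(void)
 *	{
 *		char path[64];
 *		struct stat st;
 *
 *		snprintf(path, sizeof(path), "/proc/self/task/%d",
 *			 (int)getpid());
 *		if (stat(path, &st) == 0)
 *			printf("dir=%d mode=%o\n", S_ISDIR(st.st_mode) != 0,
 *			       st.st_mode & 07777);	// expect dir=1 mode=555
 *		return 0;
 *	}
 */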

static struct dentry *proc_task_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
{
	int result = -ENOENT;
	struct task_struct *task;
	struct task_struct *leader = get_proc_task(dir);
	unsigned tid;
	struct pid_namespace *ns;

	if (!leader)
		goto out_no_task;

	tid = name_to_int(&dentry->d_name);
	if (tid == ~0U)
		goto out;

	ns = dentry->d_sb->s_fs_info;
	rcu_read_lock();
	task = find_task_by_pid_ns(tid, ns);
	if (task)
		get_task_struct(task);
	rcu_read_unlock();
	if (!task)
		goto out;
	if (!same_thread_group(leader, task))
		goto out_drop_task;

	result = proc_task_instantiate(dir, dentry, task, NULL);
out_drop_task:
	put_task_struct(task);
out:
	put_task_struct(leader);
out_no_task:
	return ERR_PTR(result);
}
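
/*
 * Illustrative userspace sketch (not part of this file): the
 * same_thread_group() check above means a tid resolves only under its own
 * group leader's directory. Assuming this process is not a thread of pid 1,
 * the second stat(2) fails with ENOENT:
 *
 *	char ours[64], theirs[64];
 *	struct stat st;
 *
 *	snprintf(ours, sizeof(ours), "/proc/self/task/%d", (int)getpid());
 *	snprintf(theirs, sizeof(theirs), "/proc/1/task/%d", (int)getpid());
 *	stat(ours, &st);	// 0: the tid is in our thread group
 *	stat(theirs, &st);	// -1, errno == ENOENT, from this lookup
 */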

/*
 * Find the first tid of a thread group to return to user space.
 *
 * Usually this is just the thread group leader, but if the user's
 * buffer was too small or there was a seek into the middle of the
 * directory we have more work to do.
 *
 * In the case of a short read we start with find_task_by_pid_ns.
 *
 * In the case of a seek we start with the leader and walk nr
 * threads past it.
 */
static struct task_struct *first_tid(struct pid *pid, int tid, loff_t f_pos,
					struct pid_namespace *ns)
{
	struct task_struct *pos, *task;
	unsigned long nr = f_pos;

	if (nr != f_pos)	/* 32bit overflow? */
		return NULL;

	rcu_read_lock();
	task = pid_task(pid, PIDTYPE_PID);
	if (!task)
		goto fail;

	/* Attempt to start with the tid of a thread */
	if (tid && nr) {
		pos = find_task_by_pid_ns(tid, ns);
		if (pos && same_thread_group(pos, task))
			goto found;
	}

	/* If nr exceeds the number of threads there is nothing to do */
	if (nr >= get_nr_threads(task))
		goto fail;

	/* If we haven't found our starting place yet start
	 * with the leader and walk nr threads forward.
	 */
	pos = task = task->group_leader;
	do {
		if (!nr--)
			goto found;
	} while_each_thread(task, pos);
fail:
	pos = NULL;
	goto out;
found:
	get_task_struct(pos);
out:
	rcu_read_unlock();
	return pos;
}
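
/*
 * Illustrative userspace sketch (not part of this file): the nr-walk above
 * services seekdir(3) on /proc/<pid>/task. Offsets 0 and 1 are "." and
 * "..", so resuming at directory offset N makes first_tid() walk
 * nr = N - 2 threads past the leader:
 *
 *	DIR *d = opendir("/proc/self/task");
 *	long off;
 *
 *	readdir(d);		// "."
 *	readdir(d);		// ".."
 *	readdir(d);		// the leader's tid
 *	off = telldir(d);
 *	seekdir(d, off);	// next readdir re-enters first_tid(), nr = off - 2
 *	closedir(d);
 */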

/*
 * Find the next thread in the thread list.
 * Return NULL if there is an error or no next thread.
 *
 * The reference to the input task_struct is released.
 */
static struct task_struct *next_tid(struct task_struct *start)
{
	struct task_struct *pos = NULL;
	rcu_read_lock();
	if (pid_alive(start)) {
		pos = next_thread(start);
		if (thread_group_leader(pos))
			pos = NULL;
		else
			get_task_struct(pos);
	}
	rcu_read_unlock();
	put_task_struct(start);
	return pos;
}
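
/*
 * Illustrative sketch: next_tid() consumes the reference on its argument,
 * so a caller-side walk holds exactly one task reference at a time
 * (use() below is a hypothetical stand-in for per-thread work):
 *
 *	for (task = first_tid(pid, tid, pos, ns); task; task = next_tid(task))
 *		use(task);	// ref held here; dropped inside next_tid()
 */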

/* for the /proc/TGID/task/ directories */
static int proc_task_readdir(struct file *file, struct dir_context *ctx)
{
	struct inode *inode = file_inode(file);
	struct task_struct *task;
	struct pid_namespace *ns;
	int tid;

	if (proc_inode_is_dead(inode))
		return -ENOENT;

	if (!dir_emit_dots(file, ctx))
		return 0;

	/* f_version caches the tid value that the last readdir call couldn't
	 * return. lseek aka telldir automagically resets f_version to 0.
	 */
	ns = inode->i_sb->s_fs_info;
	tid = (int)file->f_version;
	file->f_version = 0;
	for (task = first_tid(proc_pid(inode), tid, ctx->pos - 2, ns);
	     task;
	     task = next_tid(task), ctx->pos++) {
		char name[PROC_NUMBUF];
		int len;
		tid = task_pid_nr_ns(task, ns);
		len = snprintf(name, sizeof(name), "%d", tid);
		if (!proc_fill_cache(file, ctx, name, len,
				proc_task_instantiate, task, NULL)) {
			/* returning this tid failed, save it as the first
			 * tid for the next readdir call */
			file->f_version = (u64)tid;
			put_task_struct(task);
			break;
		}
	}

	return 0;
}
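
/*
 * Illustrative userspace sketch (not part of this file): a minimal thread
 * lister driven by the readdir implementation above; every entry name it
 * prints is a tid emitted via proc_fill_cache():
 *
 *	#include <dirent.h>
 *	#include <stdio.h>
 *
 *	int main(void)
 *	{
 *		DIR *d = opendir("/proc/self/task");
 *		struct dirent *e;
 *
 *		if (!d)
 *			return 1;
 *		while ((e = readdir(d)) != NULL)
 *			if (e->d_name[0] != '.')	// skip "." and ".."
 *				printf("tid %s\n", e->d_name);
 *		closedir(d);
 *		return 0;
 *	}
 */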

static int proc_task_getattr(struct vfsmount *mnt, struct dentry *dentry, struct kstat *stat)
{
	struct inode *inode = dentry->d_inode;
	struct task_struct *p = get_proc_task(inode);
	generic_fillattr(inode, stat);

	if (p) {
		stat->nlink += get_nr_threads(p);
		put_task_struct(p);
	}

	return 0;
}
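
/*
 * Illustrative userspace sketch (not part of this file): since getattr
 * above adds get_nr_threads() to the link count, the thread count can be
 * read off st_nlink, assuming the usual base link count of 2 for the
 * task directory:
 *
 *	struct stat st;
 *
 *	if (stat("/proc/self/task", &st) == 0)
 *		printf("threads: %lu\n", (unsigned long)st.st_nlink - 2);
 */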

static const struct inode_operations proc_task_inode_operations = {
	.lookup		= proc_task_lookup,
	.getattr	= proc_task_getattr,
	.setattr	= proc_setattr,
	/*
	 * proc_pid_permission() implements the hidepid= and gid= procfs
	 * mount options, restricting non-owners' access to /proc/PID/.
	 */
	.permission	= proc_pid_permission,
};
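
/*
 * Illustrative sketch (not part of this file): the permission hook wired up
 * above is what a hidepid= remount of procfs exercises. Assuming
 * CAP_SYS_ADMIN and a kernel accepting string options for proc:
 *
 *	#include <sys/mount.h>
 *
 *	// afterwards, other users' /proc/<pid>/ trees are inaccessible
 *	mount("proc", "/proc", "proc", MS_REMOUNT, "hidepid=2");
 */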

static const struct file_operations proc_task_operations = {
	.read		= generic_read_dir,
	.iterate	= proc_task_readdir,
	.llseek		= default_llseek,
};