WSL2-Linux-Kernel/security/lsm_audit.c

// SPDX-License-Identifier: GPL-2.0-only
/*
* common LSM auditing functions
*
* Based on code written for SELinux by :
* Stephen Smalley, <sds@tycho.nsa.gov>
* James Morris <jmorris@redhat.com>
* Author : Etienne Basset, <etienne.basset@ensta.org>
*/
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/kernel.h>
#include <linux/gfp.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <net/sock.h>
#include <linux/un.h>
#include <net/af_unix.h>
#include <linux/audit.h>
#include <linux/ipv6.h>
#include <linux/ip.h>
#include <net/ip.h>
#include <net/ipv6.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <linux/dccp.h>
#include <linux/sctp.h>
#include <linux/lsm_audit.h>
#include <linux/security.h>
/**
* ipv4_skb_to_auditdata : fill auditdata from skb
* @skb : the skb
* @ad : the audit data to fill
* @proto : the layer 4 protocol
*
* return 0 on success
*/
int ipv4_skb_to_auditdata(struct sk_buff *skb,
struct common_audit_data *ad, u8 *proto)
{
int ret = 0;
struct iphdr *ih;
ih = ip_hdr(skb);
if (ih == NULL)
return -EINVAL;
ad->u.net->v4info.saddr = ih->saddr;
ad->u.net->v4info.daddr = ih->daddr;
if (proto)
*proto = ih->protocol;
/* non initial fragment */
if (ntohs(ih->frag_off) & IP_OFFSET)
return 0;
switch (ih->protocol) {
case IPPROTO_TCP: {
struct tcphdr *th = tcp_hdr(skb);
if (th == NULL)
break;
ad->u.net->sport = th->source;
ad->u.net->dport = th->dest;
break;
}
case IPPROTO_UDP: {
struct udphdr *uh = udp_hdr(skb);
if (uh == NULL)
break;
ad->u.net->sport = uh->source;
ad->u.net->dport = uh->dest;
break;
}
case IPPROTO_DCCP: {
struct dccp_hdr *dh = dccp_hdr(skb);
if (dh == NULL)
break;
ad->u.net->sport = dh->dccph_sport;
ad->u.net->dport = dh->dccph_dport;
break;
}
case IPPROTO_SCTP: {
struct sctphdr *sh = sctp_hdr(skb);
if (sh == NULL)
break;
ad->u.net->sport = sh->source;
ad->u.net->dport = sh->dest;
break;
}
default:
ret = -EINVAL;
}
return ret;
}
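/*
 * Illustrative sketch, not part of the original file: how a caller might
 * use ipv4_skb_to_auditdata() from a socket hook.  The function name and
 * the surrounding check are hypothetical; only the common_audit_data /
 * lsm_network_audit setup follows the pattern this file expects.
 */
#if 0	/* example only */
static int example_audit_ipv4_skb(struct sock *sk, struct sk_buff *skb)
{
	struct common_audit_data ad;
	struct lsm_network_audit net = { 0, };
	u8 proto = 0;

	ad.type = LSM_AUDIT_DATA_NET;
	ad.u.net = &net;
	net.family = PF_INET;
	net.sk = sk;

	/* A non-zero return means the IPv4 header could not be parsed. */
	if (ipv4_skb_to_auditdata(skb, &ad, &proto))
		return -EINVAL;

	/* ... run the access check, then log via common_lsm_audit() ... */
	return 0;
}
#endif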
#if IS_ENABLED(CONFIG_IPV6)
/**
* ipv6_skb_to_auditdata : fill auditdata from skb
* @skb : the skb
* @ad : the audit data to fill
* @proto : the layer 4 protocol
*
* return 0 on success
*/
int ipv6_skb_to_auditdata(struct sk_buff *skb,
struct common_audit_data *ad, u8 *proto)
{
int offset, ret = 0;
struct ipv6hdr *ip6;
u8 nexthdr;
__be16 frag_off;
ip6 = ipv6_hdr(skb);
if (ip6 == NULL)
return -EINVAL;
ad->u.net->v6info.saddr = ip6->saddr;
ad->u.net->v6info.daddr = ip6->daddr;
/* IPv6 can have several extension headers before the transport
 * header; skip them */
offset = skb_network_offset(skb);
offset += sizeof(*ip6);
nexthdr = ip6->nexthdr;
offset = ipv6_skip_exthdr(skb, offset, &nexthdr, &frag_off);
if (offset < 0)
return 0;
if (proto)
*proto = nexthdr;
switch (nexthdr) {
case IPPROTO_TCP: {
struct tcphdr _tcph, *th;
th = skb_header_pointer(skb, offset, sizeof(_tcph), &_tcph);
if (th == NULL)
break;
ad->u.net->sport = th->source;
ad->u.net->dport = th->dest;
break;
}
case IPPROTO_UDP: {
struct udphdr _udph, *uh;
uh = skb_header_pointer(skb, offset, sizeof(_udph), &_udph);
if (uh == NULL)
break;
ad->u.net->sport = uh->source;
ad->u.net->dport = uh->dest;
break;
}
case IPPROTO_DCCP: {
struct dccp_hdr _dccph, *dh;
dh = skb_header_pointer(skb, offset, sizeof(_dccph), &_dccph);
if (dh == NULL)
break;
ad->u.net->sport = dh->dccph_sport;
ad->u.net->dport = dh->dccph_dport;
break;
}
case IPPROTO_SCTP: {
struct sctphdr _sctph, *sh;
sh = skb_header_pointer(skb, offset, sizeof(_sctph), &_sctph);
if (sh == NULL)
break;
ad->u.net->sport = sh->source;
ad->u.net->dport = sh->dest;
break;
}
default:
ret = -EINVAL;
}
return ret;
}
#endif
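/*
 * Illustrative sketch, not part of the original file: dispatching on the
 * ethertype so that the right parser above is used.  The wrapper name is
 * hypothetical; the ETH_P_* constants come from <linux/if_ether.h>.
 */
#if 0	/* example only */
static int example_skb_to_auditdata(struct sk_buff *skb,
				    struct common_audit_data *ad, u8 *proto)
{
	switch (skb->protocol) {
	case htons(ETH_P_IP):
		ad->u.net->family = PF_INET;
		return ipv4_skb_to_auditdata(skb, ad, proto);
#if IS_ENABLED(CONFIG_IPV6)
	case htons(ETH_P_IPV6):
		ad->u.net->family = PF_INET6;
		return ipv6_skb_to_auditdata(skb, ad, proto);
#endif
	default:
		return -EINVAL;
	}
}
#endif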
static inline void print_ipv6_addr(struct audit_buffer *ab,
const struct in6_addr *addr, __be16 port,
char *name1, char *name2)
{
if (!ipv6_addr_any(addr))
audit_log_format(ab, " %s=%pI6c", name1, addr);
if (port)
audit_log_format(ab, " %s=%d", name2, ntohs(port));
}
static inline void print_ipv4_addr(struct audit_buffer *ab, __be32 addr,
__be16 port, char *name1, char *name2)
{
if (addr)
audit_log_format(ab, " %s=%pI4", name1, &addr);
if (port)
audit_log_format(ab, " %s=%d", name2, ntohs(port));
}
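/*
 * For reference (not in the original file): with the format strings above,
 * a populated IPv4 record adds fields such as " laddr=192.0.2.1 lport=443",
 * while the IPv6 variant prints the address in the compressed %pI6c form.
 * Zero addresses and zero ports are simply omitted.
 */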
/**
 * dump_common_audit_data - helper to dump common audit data
 * @ab : the audit buffer
 * @a : common audit data
 */
static void dump_common_audit_data(struct audit_buffer *ab,
struct common_audit_data *a)
{
char comm[sizeof(current->comm)];
/*
* To keep stack sizes in check, force programmers to notice if they
* start making this union too large! See struct lsm_network_audit
* as an example of how to deal with large data.
*/
BUILD_BUG_ON(sizeof(a->u) > sizeof(void *)*2);
audit_log_format(ab, " pid=%d comm=", task_tgid_nr(current));
audit_log_untrustedstring(ab, memcpy(comm, current->comm, sizeof(comm)));
switch (a->type) {
case LSM_AUDIT_DATA_NONE:
return;
case LSM_AUDIT_DATA_IPC:
audit_log_format(ab, " key=%d ", a->u.ipc_id);
break;
case LSM_AUDIT_DATA_CAP:
audit_log_format(ab, " capability=%d ", a->u.cap);
break;
case LSM_AUDIT_DATA_PATH: {
struct inode *inode;
audit_log_d_path(ab, " path=", &a->u.path);
inode = d_backing_inode(a->u.path.dentry);
if (inode) {
audit_log_format(ab, " dev=");
audit_log_untrustedstring(ab, inode->i_sb->s_id);
audit_log_format(ab, " ino=%lu", inode->i_ino);
}
break;
}
case LSM_AUDIT_DATA_FILE: {
struct inode *inode;
audit_log_d_path(ab, " path=", &a->u.file->f_path);
inode = file_inode(a->u.file);
if (inode) {
audit_log_format(ab, " dev=");
audit_log_untrustedstring(ab, inode->i_sb->s_id);
audit_log_format(ab, " ino=%lu", inode->i_ino);
}
break;
}
case LSM_AUDIT_DATA_IOCTL_OP: {
struct inode *inode;
audit_log_d_path(ab, " path=", &a->u.op->path);
inode = a->u.op->path.dentry->d_inode;
if (inode) {
audit_log_format(ab, " dev=");
audit_log_untrustedstring(ab, inode->i_sb->s_id);
audit_log_format(ab, " ino=%lu", inode->i_ino);
}
audit_log_format(ab, " ioctlcmd=0x%hx", a->u.op->cmd);
break;
}
case LSM_AUDIT_DATA_DENTRY: {
struct inode *inode;
audit_log_format(ab, " name=");
spin_lock(&a->u.dentry->d_lock);
audit_log_untrustedstring(ab, a->u.dentry->d_name.name);
spin_unlock(&a->u.dentry->d_lock);
inode = d_backing_inode(a->u.dentry);
if (inode) {
audit_log_format(ab, " dev=");
audit_log_untrustedstring(ab, inode->i_sb->s_id);
audit_log_format(ab, " ino=%lu", inode->i_ino);
}
break;
}
case LSM_AUDIT_DATA_INODE: {
struct dentry *dentry;
struct inode *inode;
rcu_read_lock();
inode = a->u.inode;
dentry = d_find_alias_rcu(inode);
if (dentry) {
audit_log_format(ab, " name=");
spin_lock(&dentry->d_lock);
audit_log_untrustedstring(ab, dentry->d_name.name);
spin_unlock(&dentry->d_lock);
}
audit_log_format(ab, " dev=");
audit_log_untrustedstring(ab, inode->i_sb->s_id);
audit_log_format(ab, " ino=%lu", inode->i_ino);
rcu_read_unlock();
break;
}
case LSM_AUDIT_DATA_TASK: {
struct task_struct *tsk = a->u.tsk;
if (tsk) {
pid_t pid = task_tgid_nr(tsk);
if (pid) {
char comm[sizeof(tsk->comm)];
audit_log_format(ab, " opid=%d ocomm=", pid);
audit_log_untrustedstring(ab,
memcpy(comm, tsk->comm, sizeof(comm)));
}
}
break;
}
case LSM_AUDIT_DATA_NET:
if (a->u.net->sk) {
const struct sock *sk = a->u.net->sk;
struct unix_sock *u;
struct unix_address *addr;
int len = 0;
char *p = NULL;
switch (sk->sk_family) {
case AF_INET: {
struct inet_sock *inet = inet_sk(sk);
print_ipv4_addr(ab, inet->inet_rcv_saddr,
inet->inet_sport,
"laddr", "lport");
print_ipv4_addr(ab, inet->inet_daddr,
inet->inet_dport,
"faddr", "fport");
break;
}
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6: {
struct inet_sock *inet = inet_sk(sk);
print_ipv6_addr(ab, &sk->sk_v6_rcv_saddr,
inet->inet_sport,
"laddr", "lport");
print_ipv6_addr(ab, &sk->sk_v6_daddr,
inet->inet_dport,
"faddr", "fport");
break;
}
#endif
case AF_UNIX:
u = unix_sk(sk);
addr = smp_load_acquire(&u->addr);
if (!addr)
break;
if (u->path.dentry) {
audit_log_d_path(ab, " path=", &u->path);
break;
}
len = addr->len-sizeof(short);
p = &addr->name->sun_path[0];
audit_log_format(ab, " path=");
if (*p)
audit_log_untrustedstring(ab, p);
else
audit_log_n_hex(ab, p, len);
break;
}
}
switch (a->u.net->family) {
case AF_INET:
print_ipv4_addr(ab, a->u.net->v4info.saddr,
a->u.net->sport,
"saddr", "src");
print_ipv4_addr(ab, a->u.net->v4info.daddr,
a->u.net->dport,
"daddr", "dest");
break;
case AF_INET6:
print_ipv6_addr(ab, &a->u.net->v6info.saddr,
a->u.net->sport,
"saddr", "src");
print_ipv6_addr(ab, &a->u.net->v6info.daddr,
a->u.net->dport,
"daddr", "dest");
break;
}
if (a->u.net->netif > 0) {
struct net_device *dev;
/* NOTE: we always use init's namespace */
dev = dev_get_by_index(&init_net, a->u.net->netif);
if (dev) {
audit_log_format(ab, " netif=%s", dev->name);
dev_put(dev);
}
}
break;
#ifdef CONFIG_KEYS
case LSM_AUDIT_DATA_KEY:
audit_log_format(ab, " key_serial=%u", a->u.key_struct.key);
if (a->u.key_struct.key_desc) {
audit_log_format(ab, " key_desc=");
audit_log_untrustedstring(ab, a->u.key_struct.key_desc);
}
break;
#endif
case LSM_AUDIT_DATA_KMOD:
audit_log_format(ab, " kmod=");
audit_log_untrustedstring(ab, a->u.kmod_name);
break;
case LSM_AUDIT_DATA_IBPKEY: {
struct in6_addr sbn_pfx;
memset(&sbn_pfx.s6_addr, 0,
sizeof(sbn_pfx.s6_addr));
memcpy(&sbn_pfx.s6_addr, &a->u.ibpkey->subnet_prefix,
sizeof(a->u.ibpkey->subnet_prefix));
audit_log_format(ab, " pkey=0x%x subnet_prefix=%pI6c",
a->u.ibpkey->pkey, &sbn_pfx);
break;
}
case LSM_AUDIT_DATA_IBENDPORT:
audit_log_format(ab, " device=%s port_num=%u",
a->u.ibendport->dev_name,
a->u.ibendport->port);
break;
case LSM_AUDIT_DATA_LOCKDOWN:
audit_log_format(ab, " lockdown_reason=\"%s\"",
lockdown_reasons[a->u.reason]);
break;
} /* switch (a->type) */
}
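/*
 * Illustrative sketch, not part of the original file: a caller only has to
 * pick a type and fill the matching union member before handing the
 * structure to common_lsm_audit() below.  The helper name is hypothetical.
 */
#if 0	/* example only */
static void example_audit_dentry(struct dentry *dentry)
{
	struct common_audit_data ad;

	ad.type = LSM_AUDIT_DATA_DENTRY;
	ad.u.dentry = dentry;
	/* NULL callbacks are allowed; only the common fields get logged. */
	common_lsm_audit(&ad, NULL, NULL);
}
#endif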
/**
* common_lsm_audit - generic LSM auditing function
* @a: auxiliary audit data
* @pre_audit: lsm-specific pre-audit callback
* @post_audit: lsm-specific post-audit callback
*
* setup the audit buffer for common security information
* uses callback to print LSM specific information
*/
void common_lsm_audit(struct common_audit_data *a,
void (*pre_audit)(struct audit_buffer *, void *),
void (*post_audit)(struct audit_buffer *, void *))
{
struct audit_buffer *ab;
if (a == NULL)
return;
/* we use GFP_ATOMIC so we won't sleep */
ab = audit_log_start(audit_context(), GFP_ATOMIC | __GFP_NOWARN,
AUDIT_AVC);
if (ab == NULL)
return;
if (pre_audit)
pre_audit(ab, a);
dump_common_audit_data(ab, a);
if (post_audit)
post_audit(ab, a);
audit_log_end(ab);
}
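/*
 * Illustrative sketch, not part of the original file: an LSM would normally
 * pass its own callbacks so that module-specific fields surround the common
 * ones printed by dump_common_audit_data().  All "example_*" names are
 * hypothetical.
 */
#if 0	/* example only */
static void example_pre_audit(struct audit_buffer *ab, void *a)
{
	/* Printed before the common pid/comm and object fields. */
	audit_log_format(ab, "lsm=example denied=1");
}

static void example_post_audit(struct audit_buffer *ab, void *a)
{
	/* Module-specific trailer, e.g. subject/object labels, goes here. */
	audit_log_format(ab, " example_result=denied");
}

static void example_audit(struct common_audit_data *a)
{
	common_lsm_audit(a, example_pre_audit, example_post_audit);
}
#endif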