Merge branch 'support-global-data'

Daniel Borkmann says:

====================
This series is a major rework of the previously submitted libbpf
patches [0] in order to add global data support for BPF. Based
upon feedback from the LPC discussions [1], the kernel has been
extended with proper infrastructure that allows for full
.bss/.data/.rodata sections on the BPF loader side. That support
is then also added to libbpf in this series, which allows for a
more natural, C-like style of programming BPF programs. For more
information on the loader, please refer to the 'bpf, libbpf:
support global data/bss/rodata sections' patch in this series.
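
As a rough illustration of what this enables (a hypothetical
sketch, not taken from a patch in this series), a BPF C program
can now keep state in plain static variables instead of explicit
single-entry array maps:

    #include <linux/bpf.h>
    #include "bpf_helpers.h"    /* SEC(), as used by the BPF selftests */

    static const int enabled = 1;  /* ends up in .rodata (read-only) */
    static long pkt_cnt;           /* ends up in .bss (read-write)   */

    SEC("tc")
    int count_pkts(struct __sk_buff *skb)
    {
            if (enabled)
                    pkt_cnt++;
            return 0;
    }

The loader turns each such section into a single-entry array map
and the verifier translates the accesses into direct value loads
and stores against that map.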

Thanks a lot!

  v5 -> v6:
   - Removed synchronize_rcu() from map freeze (Jann)
   - Rest as-is
  v4 -> v5:
   - Removed index selection again for ldimm64 (Alexei)
   - Adapted related test cases and added new ones to test
     rejection of off != 0
  v3 -> v4:
   - Various fixes in BTF verification e.g. to disallow
     Var and DataSec to be an intermediate type during resolve (Martin)
   - More BTF test cases added
   - Few cleanups in key-less BTF commit (Martin)
   - Bump libbpf minor version from 2 to 3
   - Renamed and simplified read-only locking
   - Various minor improvements all over the place
  v2 -> v3:
   - Implement BTF support in kernel, libbpf, bpftool, add tests
   - Fix idx + off conversion (Andrii)
   - Document lower / higher bits for direct value access (Andrii)
   - Add tests with small value size (Andrii)
   - Add index selection into ldimm64 (Andrii)
   - Fix missing fdput() (Jann)
   - Reject invalid flags in BPF_F_*_PROG (Jakub)
   - Complete rework of libbpf support, includes:
    - Add objname to map name (Stanislav)
    - Make .rodata map full read-only after setup (Andrii)
    - Merge relocation handling into single one (Andrii)
    - Store global maps into obj->maps array (Andrii, Alexei)
    - Debug message when skipping section (Andrii)
    - Reject non-static global data till we have
      semantics for sharing them (Yonghong, Andrii, Alexei)
    - More test cases and completely reworked prog test (Alexei)
   - Fixes, cleanups, etc all over the set
   - Not yet addressed:
    - Make BTF mandatory for these maps (Alexei)
    -> Waiting till BTF support for these lands first
  v1 -> v2:
    - Instead of 32-bit static data, implement full global
      data support (Alexei)

  [0] https://patchwork.ozlabs.org/cover/1040290/
  [1] http://vger.kernel.org/lpc-bpf2018.html#session-3
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
This commit is contained in:
Alexei Starovoitov 2019-04-09 17:05:48 -07:00
Parents: ff466b5805 c861168b7c
Commit: 6316f78306
37 changed files with 3018 additions and 207 deletions


@ -82,6 +82,8 @@ sequentially and type id is assigned to each recognized type starting from id
#define BTF_KIND_RESTRICT 11 /* Restrict */ #define BTF_KIND_RESTRICT 11 /* Restrict */
#define BTF_KIND_FUNC 12 /* Function */ #define BTF_KIND_FUNC 12 /* Function */
#define BTF_KIND_FUNC_PROTO 13 /* Function Proto */ #define BTF_KIND_FUNC_PROTO 13 /* Function Proto */
#define BTF_KIND_VAR 14 /* Variable */
#define BTF_KIND_DATASEC 15 /* Section */
Note that the type section encodes debug info, not just pure types. Note that the type section encodes debug info, not just pure types.
``BTF_KIND_FUNC`` is not a type, and it represents a defined subprogram. ``BTF_KIND_FUNC`` is not a type, and it represents a defined subprogram.
@ -393,6 +395,61 @@ refers to parameter type.
If the function has variable arguments, the last parameter is encoded with If the function has variable arguments, the last parameter is encoded with
``name_off = 0`` and ``type = 0``. ``name_off = 0`` and ``type = 0``.
2.2.14 BTF_KIND_VAR
~~~~~~~~~~~~~~~~~~~
``struct btf_type`` encoding requirement:
* ``name_off``: offset to a valid C identifier
* ``info.kind_flag``: 0
* ``info.kind``: BTF_KIND_VAR
* ``info.vlen``: 0
* ``type``: the type of the variable
``btf_type`` is followed by a single ``struct btf_var`` with the
following data::
struct btf_var {
__u32 linkage;
};
``struct btf_var`` encoding:
* ``linkage``: currently only 0 for a static variable, or 1 for a
  globally allocated variable in ELF sections
Not all types of global variables are supported by LLVM at this point.
The following are currently available:
* static variables with or without section attributes
* global variables with section attributes
The latter is for future extraction of map key/value type IDs from a
map definition.
2.2.15 BTF_KIND_DATASEC
~~~~~~~~~~~~~~~~~~~~~~~
``struct btf_type`` encoding requirement:
* ``name_off``: offset to a valid name associated with a variable or
one of .data/.bss/.rodata
* ``info.kind_flag``: 0
* ``info.kind``: BTF_KIND_DATASEC
* ``info.vlen``: # of variables
* ``size``: total section size in bytes (0 at compilation time, patched
to actual size by BPF loaders such as libbpf)
``btf_type`` is followed by ``info.vlen`` number of ``struct btf_var_secinfo``.::
struct btf_var_secinfo {
__u32 type;
__u32 offset;
__u32 size;
};
``struct btf_var_secinfo`` encoding:
* ``type``: the type of the BTF_KIND_VAR variable
* ``offset``: the in-section offset of the variable
* ``size``: the size of the variable in bytes
3. BTF Kernel API 3. BTF Kernel API
***************** *****************
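
As a hedged illustration of how VAR and DATASEC compose (type IDs and
name offsets below are made up for the example, they are not part of
the patch), the records a compiler/loader would emit for a single
``static int my_cnt;`` placed in .bss look roughly like::

    /* BTF_KIND_VAR describing "my_cnt" of type "int" (int assumed id 1) */
    struct btf_type var_t = {
            .name_off = 7,                      /* "my_cnt" in string section */
            .info     = (BTF_KIND_VAR << 24),   /* kind_flag = 0, vlen = 0    */
            .type     = 1,                      /* the variable's type        */
    };
    struct btf_var var_ext = { .linkage = BTF_VAR_STATIC };

    /* BTF_KIND_DATASEC describing ".bss" with one variable (VAR assumed id 2) */
    struct btf_type sec_t = {
            .name_off = 14,                       /* ".bss"                   */
            .info     = (BTF_KIND_DATASEC << 24) | 1, /* vlen = 1             */
            .size     = 0,   /* patched to the real section size by the loader */
    };
    struct btf_var_secinfo sec_vars[] = {
            { .type = 2, .offset = 0, .size = 4 },  /* the VAR above, 4 bytes */
    };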


@ -57,6 +57,12 @@ struct bpf_map_ops {
const struct btf *btf, const struct btf *btf,
const struct btf_type *key_type, const struct btf_type *key_type,
const struct btf_type *value_type); const struct btf_type *value_type);
/* Direct value access helpers. */
int (*map_direct_value_addr)(const struct bpf_map *map,
u64 *imm, u32 off);
int (*map_direct_value_meta)(const struct bpf_map *map,
u64 imm, u32 *off);
}; };
struct bpf_map { struct bpf_map {
@ -81,7 +87,8 @@ struct bpf_map {
struct btf *btf; struct btf *btf;
u32 pages; u32 pages;
bool unpriv_array; bool unpriv_array;
-	/* 51 bytes hole */
+	bool frozen; /* write-once */
+	/* 48 bytes hole */
/* The 3rd and 4th cacheline with misc members to avoid false sharing /* The 3rd and 4th cacheline with misc members to avoid false sharing
* particularly with refcounting. * particularly with refcounting.
@ -424,6 +431,35 @@ struct bpf_array {
#define BPF_COMPLEXITY_LIMIT_INSNS 1000000 /* yes. 1M insns */ #define BPF_COMPLEXITY_LIMIT_INSNS 1000000 /* yes. 1M insns */
#define MAX_TAIL_CALL_CNT 32 #define MAX_TAIL_CALL_CNT 32
#define BPF_F_ACCESS_MASK (BPF_F_RDONLY | \
BPF_F_RDONLY_PROG | \
BPF_F_WRONLY | \
BPF_F_WRONLY_PROG)
#define BPF_MAP_CAN_READ BIT(0)
#define BPF_MAP_CAN_WRITE BIT(1)
static inline u32 bpf_map_flags_to_cap(struct bpf_map *map)
{
u32 access_flags = map->map_flags & (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG);
/* Combination of BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG is
* not possible.
*/
if (access_flags & BPF_F_RDONLY_PROG)
return BPF_MAP_CAN_READ;
else if (access_flags & BPF_F_WRONLY_PROG)
return BPF_MAP_CAN_WRITE;
else
return BPF_MAP_CAN_READ | BPF_MAP_CAN_WRITE;
}
static inline bool bpf_map_flags_access_ok(u32 access_flags)
{
return (access_flags & (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG)) !=
(BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG);
}
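
A hedged user-space sketch of how these program-side flags are meant to
be used (a raw bpf(2) call with arbitrary key/value sizes, not code
from this patch): create an array map that programs may only read,
while the syscall side stays writable until the map is frozen:

    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    static int create_prog_rdonly_array(void)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.map_type    = BPF_MAP_TYPE_ARRAY;
            attr.key_size    = 4;
            attr.value_size  = 64;
            attr.max_entries = 1;
            /* bpf_map_flags_to_cap() then yields BPF_MAP_CAN_READ only,
             * so the verifier rejects stores and map_update calls from
             * the program side.
             */
            attr.map_flags   = BPF_F_RDONLY_PROG;

            return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
    }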
struct bpf_event_entry { struct bpf_event_entry {
struct perf_event *event; struct perf_event *event;
struct file *perf_file; struct file *perf_file;


@ -224,6 +224,10 @@ struct bpf_insn_aux_data {
unsigned long map_state; /* pointer/poison value for maps */ unsigned long map_state; /* pointer/poison value for maps */
s32 call_imm; /* saved imm field of call insn */ s32 call_imm; /* saved imm field of call insn */
u32 alu_limit; /* limit for add/sub register with pointer */ u32 alu_limit; /* limit for add/sub register with pointer */
struct {
u32 map_index; /* index into used_maps[] */
u32 map_off; /* offset from value base address */
};
}; };
int ctx_field_size; /* the ctx field size for load insn, maybe 0 */ int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
int sanitize_stack_off; /* stack slot to be cleared */ int sanitize_stack_off; /* stack slot to be cleared */


@ -51,6 +51,7 @@ bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
const struct btf_member *m, const struct btf_member *m,
u32 expected_offset, u32 expected_size); u32 expected_offset, u32 expected_size);
int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t); int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t);
bool btf_type_is_void(const struct btf_type *t);
#ifdef CONFIG_BPF_SYSCALL #ifdef CONFIG_BPF_SYSCALL
const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id); const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);


@ -105,6 +105,7 @@ enum bpf_cmd {
BPF_BTF_GET_FD_BY_ID, BPF_BTF_GET_FD_BY_ID,
BPF_TASK_FD_QUERY, BPF_TASK_FD_QUERY,
BPF_MAP_LOOKUP_AND_DELETE_ELEM, BPF_MAP_LOOKUP_AND_DELETE_ELEM,
BPF_MAP_FREEZE,
}; };
enum bpf_map_type { enum bpf_map_type {
@ -255,8 +256,19 @@ enum bpf_attach_type {
*/ */
#define BPF_F_ANY_ALIGNMENT (1U << 1) #define BPF_F_ANY_ALIGNMENT (1U << 1)
-/* when bpf_ldimm64->src_reg == BPF_PSEUDO_MAP_FD, bpf_ldimm64->imm == fd */
+/* When BPF ldimm64's insn[0].src_reg != 0 then this can have
* two extensions:
*
* insn[0].src_reg: BPF_PSEUDO_MAP_FD BPF_PSEUDO_MAP_VALUE
* insn[0].imm: map fd map fd
* insn[1].imm: 0 offset into value
* insn[0].off: 0 0
* insn[1].off: 0 0
* ldimm64 rewrite: address of map address of map[0]+offset
* verifier type: CONST_PTR_TO_MAP PTR_TO_MAP_VALUE
*/
#define BPF_PSEUDO_MAP_FD 1 #define BPF_PSEUDO_MAP_FD 1
#define BPF_PSEUDO_MAP_VALUE 2
/* when bpf_call->src_reg == BPF_PSEUDO_CALL, bpf_call->imm == pc-relative /* when bpf_call->src_reg == BPF_PSEUDO_CALL, bpf_call->imm == pc-relative
* offset to another bpf function * offset to another bpf function
@ -283,7 +295,7 @@ enum bpf_attach_type {
#define BPF_OBJ_NAME_LEN 16U #define BPF_OBJ_NAME_LEN 16U
-/* Flags for accessing BPF object */
+/* Flags for accessing BPF object from syscall side. */
#define BPF_F_RDONLY (1U << 3) #define BPF_F_RDONLY (1U << 3)
#define BPF_F_WRONLY (1U << 4) #define BPF_F_WRONLY (1U << 4)
@ -293,6 +305,10 @@ enum bpf_attach_type {
/* Zero-initialize hash function seed. This should only be used for testing. */ /* Zero-initialize hash function seed. This should only be used for testing. */
#define BPF_F_ZERO_SEED (1U << 6) #define BPF_F_ZERO_SEED (1U << 6)
/* Flags for accessing BPF object from program side. */
#define BPF_F_RDONLY_PROG (1U << 7)
#define BPF_F_WRONLY_PROG (1U << 8)
/* flags for BPF_PROG_QUERY */ /* flags for BPF_PROG_QUERY */
#define BPF_F_QUERY_EFFECTIVE (1U << 0) #define BPF_F_QUERY_EFFECTIVE (1U << 0)
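
A hedged sketch of the new instruction pair documented above (a
hypothetical helper, not part of this patch): emitting
"rX = address of map value + off" against a single-entry array map:

    #include <string.h>
    #include <linux/bpf.h>

    static void emit_ld_map_value(struct bpf_insn *insn, int dst_reg,
                                  int map_fd, __u32 off)
    {
            memset(insn, 0, 2 * sizeof(*insn));
            insn[0].code    = BPF_LD | BPF_DW | BPF_IMM;
            insn[0].dst_reg = dst_reg;
            insn[0].src_reg = BPF_PSEUDO_MAP_VALUE;
            insn[0].imm     = map_fd;   /* insn[0].imm: map fd            */
            insn[1].imm     = off;      /* insn[1].imm: offset into value */
    }

The verifier rewrites the pair into the actual address of map[0] plus
off and tracks the destination register as PTR_TO_MAP_VALUE.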


@ -39,11 +39,11 @@ struct btf_type {
* struct, union and fwd * struct, union and fwd
*/ */
__u32 info; __u32 info;
-/* "size" is used by INT, ENUM, STRUCT and UNION.
+/* "size" is used by INT, ENUM, STRUCT, UNION and DATASEC.
 * "size" tells the size of the type it is describing.
 *
 * "type" is used by PTR, TYPEDEF, VOLATILE, CONST, RESTRICT,
-* FUNC and FUNC_PROTO.
+* FUNC, FUNC_PROTO and VAR.
* "type" is a type_id referring to another type. * "type" is a type_id referring to another type.
*/ */
union { union {
@ -70,8 +70,10 @@ struct btf_type {
#define BTF_KIND_RESTRICT 11 /* Restrict */ #define BTF_KIND_RESTRICT 11 /* Restrict */
#define BTF_KIND_FUNC 12 /* Function */ #define BTF_KIND_FUNC 12 /* Function */
#define BTF_KIND_FUNC_PROTO 13 /* Function Proto */ #define BTF_KIND_FUNC_PROTO 13 /* Function Proto */
-#define BTF_KIND_MAX 13
-#define NR_BTF_KINDS 14
+#define BTF_KIND_VAR 14 /* Variable */
+#define BTF_KIND_DATASEC 15 /* Section */
+#define BTF_KIND_MAX BTF_KIND_DATASEC
+#define NR_BTF_KINDS (BTF_KIND_MAX + 1)
/* For some specific BTF_KIND, "struct btf_type" is immediately /* For some specific BTF_KIND, "struct btf_type" is immediately
* followed by extra data. * followed by extra data.
@ -138,4 +140,26 @@ struct btf_param {
__u32 type; __u32 type;
}; };
enum {
BTF_VAR_STATIC = 0,
BTF_VAR_GLOBAL_ALLOCATED,
};
/* BTF_KIND_VAR is followed by a single "struct btf_var" to describe
* additional information related to the variable such as its linkage.
*/
struct btf_var {
__u32 linkage;
};
/* BTF_KIND_DATASEC is followed by multiple "struct btf_var_secinfo"
* to describe all BTF_KIND_VAR types it contains along with it's
* in-section offset as well as size.
*/
struct btf_var_secinfo {
__u32 type;
__u32 offset;
__u32 size;
};
#endif /* _UAPI__LINUX_BTF_H__ */ #endif /* _UAPI__LINUX_BTF_H__ */


@ -22,7 +22,7 @@
#include "map_in_map.h" #include "map_in_map.h"
#define ARRAY_CREATE_FLAG_MASK \ #define ARRAY_CREATE_FLAG_MASK \
-	(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)
+	(BPF_F_NUMA_NODE | BPF_F_ACCESS_MASK)
static void bpf_array_free_percpu(struct bpf_array *array) static void bpf_array_free_percpu(struct bpf_array *array)
{ {
@ -63,6 +63,7 @@ int array_map_alloc_check(union bpf_attr *attr)
if (attr->max_entries == 0 || attr->key_size != 4 || if (attr->max_entries == 0 || attr->key_size != 4 ||
attr->value_size == 0 || attr->value_size == 0 ||
attr->map_flags & ~ARRAY_CREATE_FLAG_MASK || attr->map_flags & ~ARRAY_CREATE_FLAG_MASK ||
!bpf_map_flags_access_ok(attr->map_flags) ||
(percpu && numa_node != NUMA_NO_NODE)) (percpu && numa_node != NUMA_NO_NODE))
return -EINVAL; return -EINVAL;
@ -160,6 +161,36 @@ static void *array_map_lookup_elem(struct bpf_map *map, void *key)
return array->value + array->elem_size * (index & array->index_mask); return array->value + array->elem_size * (index & array->index_mask);
} }
static int array_map_direct_value_addr(const struct bpf_map *map, u64 *imm,
u32 off)
{
struct bpf_array *array = container_of(map, struct bpf_array, map);
if (map->max_entries != 1)
return -ENOTSUPP;
if (off >= map->value_size)
return -EINVAL;
*imm = (unsigned long)array->value;
return 0;
}
static int array_map_direct_value_meta(const struct bpf_map *map, u64 imm,
u32 *off)
{
struct bpf_array *array = container_of(map, struct bpf_array, map);
u64 base = (unsigned long)array->value;
u64 range = array->elem_size;
if (map->max_entries != 1)
return -ENOTSUPP;
if (imm < base || imm >= base + range)
return -ENOENT;
*off = imm - base;
return 0;
}
/* emit BPF instructions equivalent to C code of array_map_lookup_elem() */ /* emit BPF instructions equivalent to C code of array_map_lookup_elem() */
static u32 array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf) static u32 array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
{ {
@ -360,7 +391,8 @@ static void array_map_seq_show_elem(struct bpf_map *map, void *key,
return; return;
} }
-	seq_printf(m, "%u: ", *(u32 *)key);
+	if (map->btf_key_type_id)
+		seq_printf(m, "%u: ", *(u32 *)key);
btf_type_seq_show(map->btf, map->btf_value_type_id, value, m); btf_type_seq_show(map->btf, map->btf_value_type_id, value, m);
seq_puts(m, "\n"); seq_puts(m, "\n");
@ -397,6 +429,18 @@ static int array_map_check_btf(const struct bpf_map *map,
{ {
u32 int_data; u32 int_data;
/* One exception for keyless BTF: .bss/.data/.rodata map */
if (btf_type_is_void(key_type)) {
if (map->map_type != BPF_MAP_TYPE_ARRAY ||
map->max_entries != 1)
return -EINVAL;
if (BTF_INFO_KIND(value_type->info) != BTF_KIND_DATASEC)
return -EINVAL;
return 0;
}
if (BTF_INFO_KIND(key_type->info) != BTF_KIND_INT) if (BTF_INFO_KIND(key_type->info) != BTF_KIND_INT)
return -EINVAL; return -EINVAL;
@ -419,6 +463,8 @@ const struct bpf_map_ops array_map_ops = {
.map_update_elem = array_map_update_elem, .map_update_elem = array_map_update_elem,
.map_delete_elem = array_map_delete_elem, .map_delete_elem = array_map_delete_elem,
.map_gen_lookup = array_map_gen_lookup, .map_gen_lookup = array_map_gen_lookup,
.map_direct_value_addr = array_map_direct_value_addr,
.map_direct_value_meta = array_map_direct_value_meta,
.map_seq_show_elem = array_map_seq_show_elem, .map_seq_show_elem = array_map_seq_show_elem,
.map_check_btf = array_map_check_btf, .map_check_btf = array_map_check_btf,
}; };
@ -440,6 +486,9 @@ static int fd_array_map_alloc_check(union bpf_attr *attr)
/* only file descriptors can be stored in this type of map */ /* only file descriptors can be stored in this type of map */
if (attr->value_size != sizeof(u32)) if (attr->value_size != sizeof(u32))
return -EINVAL; return -EINVAL;
/* Program read-only/write-only not supported for special maps yet. */
if (attr->map_flags & (BPF_F_RDONLY_PROG | BPF_F_WRONLY_PROG))
return -EINVAL;
return array_map_alloc_check(attr); return array_map_alloc_check(attr);
} }
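
A hedged sketch of the loader-side counterpart to the keyless-BTF
exception above (field values are illustrative; btf_fd and the
DATASEC type id are assumed to come from earlier BTF setup, and the
headers are the same as in the bpf(2) sketch shown earlier):

    static int create_rodata_map(int btf_fd, __u32 datasec_type_id,
                                 __u32 sec_size)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.map_type          = BPF_MAP_TYPE_ARRAY;
            attr.key_size          = 4;
            attr.value_size        = sec_size;
            attr.max_entries       = 1;
            attr.map_flags         = BPF_F_RDONLY_PROG;
            attr.btf_fd            = btf_fd;
            attr.btf_key_type_id   = 0;               /* keyless BTF (void) */
            attr.btf_value_type_id = datasec_type_id; /* BTF_KIND_DATASEC   */

            return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
    }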


@ -185,6 +185,16 @@
i < btf_type_vlen(struct_type); \ i < btf_type_vlen(struct_type); \
i++, member++) i++, member++)
#define for_each_vsi(i, struct_type, member) \
for (i = 0, member = btf_type_var_secinfo(struct_type); \
i < btf_type_vlen(struct_type); \
i++, member++)
#define for_each_vsi_from(i, from, struct_type, member) \
for (i = from, member = btf_type_var_secinfo(struct_type) + from; \
i < btf_type_vlen(struct_type); \
i++, member++)
static DEFINE_IDR(btf_idr); static DEFINE_IDR(btf_idr);
static DEFINE_SPINLOCK(btf_idr_lock); static DEFINE_SPINLOCK(btf_idr_lock);
@ -262,6 +272,8 @@ static const char * const btf_kind_str[NR_BTF_KINDS] = {
[BTF_KIND_RESTRICT] = "RESTRICT", [BTF_KIND_RESTRICT] = "RESTRICT",
[BTF_KIND_FUNC] = "FUNC", [BTF_KIND_FUNC] = "FUNC",
[BTF_KIND_FUNC_PROTO] = "FUNC_PROTO", [BTF_KIND_FUNC_PROTO] = "FUNC_PROTO",
[BTF_KIND_VAR] = "VAR",
[BTF_KIND_DATASEC] = "DATASEC",
}; };
struct btf_kind_operations { struct btf_kind_operations {
@ -314,7 +326,7 @@ static bool btf_type_is_modifier(const struct btf_type *t)
return false; return false;
} }
-static bool btf_type_is_void(const struct btf_type *t)
+bool btf_type_is_void(const struct btf_type *t)
{ {
return t == &btf_void; return t == &btf_void;
} }
@ -375,13 +387,36 @@ static bool btf_type_is_int(const struct btf_type *t)
return BTF_INFO_KIND(t->info) == BTF_KIND_INT; return BTF_INFO_KIND(t->info) == BTF_KIND_INT;
} }
static bool btf_type_is_var(const struct btf_type *t)
{
return BTF_INFO_KIND(t->info) == BTF_KIND_VAR;
}
static bool btf_type_is_datasec(const struct btf_type *t)
{
return BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC;
}
/* Types that act only as a source, not sink or intermediate
* type when resolving.
*/
static bool btf_type_is_resolve_source_only(const struct btf_type *t)
{
return btf_type_is_var(t) ||
btf_type_is_datasec(t);
}
/* What types need to be resolved? /* What types need to be resolved?
* *
* btf_type_is_modifier() is an obvious one. * btf_type_is_modifier() is an obvious one.
* *
* btf_type_is_struct() because its member refers to * btf_type_is_struct() because its member refers to
* another type (through member->type). * another type (through member->type).
*
* btf_type_is_var() because the variable refers to
* another type. btf_type_is_datasec() holds multiple
* btf_type_is_var() types that need resolving.
*
* btf_type_is_array() because its element (array->type) * btf_type_is_array() because its element (array->type)
* refers to another type. Array can be thought of a * refers to another type. Array can be thought of a
* special case of struct while array just has the same * special case of struct while array just has the same
@ -390,9 +425,11 @@ static bool btf_type_is_int(const struct btf_type *t)
static bool btf_type_needs_resolve(const struct btf_type *t) static bool btf_type_needs_resolve(const struct btf_type *t)
{ {
 	return btf_type_is_modifier(t) ||
 	       btf_type_is_ptr(t) ||
 	       btf_type_is_struct(t) ||
-	       btf_type_is_array(t);
+	       btf_type_is_array(t) ||
+	       btf_type_is_var(t) ||
+	       btf_type_is_datasec(t);
} }
/* t->size can be used */ /* t->size can be used */
@ -403,6 +440,7 @@ static bool btf_type_has_size(const struct btf_type *t)
case BTF_KIND_STRUCT: case BTF_KIND_STRUCT:
case BTF_KIND_UNION: case BTF_KIND_UNION:
case BTF_KIND_ENUM: case BTF_KIND_ENUM:
case BTF_KIND_DATASEC:
return true; return true;
} }
@ -467,6 +505,16 @@ static const struct btf_enum *btf_type_enum(const struct btf_type *t)
return (const struct btf_enum *)(t + 1); return (const struct btf_enum *)(t + 1);
} }
static const struct btf_var *btf_type_var(const struct btf_type *t)
{
return (const struct btf_var *)(t + 1);
}
static const struct btf_var_secinfo *btf_type_var_secinfo(const struct btf_type *t)
{
return (const struct btf_var_secinfo *)(t + 1);
}
static const struct btf_kind_operations *btf_type_ops(const struct btf_type *t) static const struct btf_kind_operations *btf_type_ops(const struct btf_type *t)
{ {
return kind_ops[BTF_INFO_KIND(t->info)]; return kind_ops[BTF_INFO_KIND(t->info)];
@ -478,23 +526,31 @@ static bool btf_name_offset_valid(const struct btf *btf, u32 offset)
offset < btf->hdr.str_len; offset < btf->hdr.str_len;
} }
-/* Only C-style identifier is permitted. This can be relaxed if
- * necessary.
- */
-static bool btf_name_valid_identifier(const struct btf *btf, u32 offset)
+static bool __btf_name_char_ok(char c, bool first, bool dot_ok)
+{
+	if ((first ? !isalpha(c) :
+		     !isalnum(c)) &&
+	    c != '_' &&
+	    ((c == '.' && !dot_ok) ||
+	      c != '.'))
+		return false;
+	return true;
+}
+
+static bool __btf_name_valid(const struct btf *btf, u32 offset, bool dot_ok)
 {
 	/* offset must be valid */
 	const char *src = &btf->strings[offset];
 	const char *src_limit;
 
-	if (!isalpha(*src) && *src != '_')
+	if (!__btf_name_char_ok(*src, true, dot_ok))
 		return false;
 
 	/* set a limit on identifier length */
 	src_limit = src + KSYM_NAME_LEN;
 	src++;
 	while (*src && src < src_limit) {
-		if (!isalnum(*src) && *src != '_')
+		if (!__btf_name_char_ok(*src, false, dot_ok))
 			return false;
 		src++;
 	}
@ -502,6 +558,19 @@ static bool btf_name_valid_identifier(const struct btf *btf, u32 offset)
return !*src; return !*src;
} }
/* Only C-style identifier is permitted. This can be relaxed if
* necessary.
*/
static bool btf_name_valid_identifier(const struct btf *btf, u32 offset)
{
return __btf_name_valid(btf, offset, false);
}
static bool btf_name_valid_section(const struct btf *btf, u32 offset)
{
return __btf_name_valid(btf, offset, true);
}
static const char *__btf_name_by_offset(const struct btf *btf, u32 offset) static const char *__btf_name_by_offset(const struct btf *btf, u32 offset)
{ {
if (!offset) if (!offset)
@ -697,6 +766,32 @@ static void btf_verifier_log_member(struct btf_verifier_env *env,
__btf_verifier_log(log, "\n"); __btf_verifier_log(log, "\n");
} }
__printf(4, 5)
static void btf_verifier_log_vsi(struct btf_verifier_env *env,
const struct btf_type *datasec_type,
const struct btf_var_secinfo *vsi,
const char *fmt, ...)
{
struct bpf_verifier_log *log = &env->log;
va_list args;
if (!bpf_verifier_log_needed(log))
return;
if (env->phase != CHECK_META)
btf_verifier_log_type(env, datasec_type, NULL);
__btf_verifier_log(log, "\t type_id=%u offset=%u size=%u",
vsi->type, vsi->offset, vsi->size);
if (fmt && *fmt) {
__btf_verifier_log(log, " ");
va_start(args, fmt);
bpf_verifier_vlog(log, fmt, args);
va_end(args);
}
__btf_verifier_log(log, "\n");
}
static void btf_verifier_log_hdr(struct btf_verifier_env *env, static void btf_verifier_log_hdr(struct btf_verifier_env *env,
u32 btf_data_size) u32 btf_data_size)
{ {
@ -974,7 +1069,8 @@ const struct btf_type *btf_type_id_size(const struct btf *btf,
} else if (btf_type_is_ptr(size_type)) { } else if (btf_type_is_ptr(size_type)) {
size = sizeof(void *); size = sizeof(void *);
} else { } else {
-		if (WARN_ON_ONCE(!btf_type_is_modifier(size_type)))
+		if (WARN_ON_ONCE(!btf_type_is_modifier(size_type) &&
+				 !btf_type_is_var(size_type)))
return NULL; return NULL;
size = btf->resolved_sizes[size_type_id]; size = btf->resolved_sizes[size_type_id];
@ -1509,7 +1605,7 @@ static int btf_modifier_resolve(struct btf_verifier_env *env,
u32 next_type_size = 0; u32 next_type_size = 0;
next_type = btf_type_by_id(btf, next_type_id); next_type = btf_type_by_id(btf, next_type_id);
-	if (!next_type) {
+	if (!next_type || btf_type_is_resolve_source_only(next_type)) {
btf_verifier_log_type(env, v->t, "Invalid type_id"); btf_verifier_log_type(env, v->t, "Invalid type_id");
return -EINVAL; return -EINVAL;
} }
@ -1542,6 +1638,53 @@ static int btf_modifier_resolve(struct btf_verifier_env *env,
return 0; return 0;
} }
static int btf_var_resolve(struct btf_verifier_env *env,
const struct resolve_vertex *v)
{
const struct btf_type *next_type;
const struct btf_type *t = v->t;
u32 next_type_id = t->type;
struct btf *btf = env->btf;
u32 next_type_size;
next_type = btf_type_by_id(btf, next_type_id);
if (!next_type || btf_type_is_resolve_source_only(next_type)) {
btf_verifier_log_type(env, v->t, "Invalid type_id");
return -EINVAL;
}
if (!env_type_is_resolve_sink(env, next_type) &&
!env_type_is_resolved(env, next_type_id))
return env_stack_push(env, next_type, next_type_id);
if (btf_type_is_modifier(next_type)) {
const struct btf_type *resolved_type;
u32 resolved_type_id;
resolved_type_id = next_type_id;
resolved_type = btf_type_id_resolve(btf, &resolved_type_id);
if (btf_type_is_ptr(resolved_type) &&
!env_type_is_resolve_sink(env, resolved_type) &&
!env_type_is_resolved(env, resolved_type_id))
return env_stack_push(env, resolved_type,
resolved_type_id);
}
/* We must resolve to something concrete at this point, no
* forward types or similar that would resolve to size of
* zero is allowed.
*/
if (!btf_type_id_size(btf, &next_type_id, &next_type_size)) {
btf_verifier_log_type(env, v->t, "Invalid type_id");
return -EINVAL;
}
env_stack_pop_resolved(env, next_type_id, next_type_size);
return 0;
}
static int btf_ptr_resolve(struct btf_verifier_env *env, static int btf_ptr_resolve(struct btf_verifier_env *env,
const struct resolve_vertex *v) const struct resolve_vertex *v)
{ {
@ -1551,7 +1694,7 @@ static int btf_ptr_resolve(struct btf_verifier_env *env,
struct btf *btf = env->btf; struct btf *btf = env->btf;
next_type = btf_type_by_id(btf, next_type_id); next_type = btf_type_by_id(btf, next_type_id);
-	if (!next_type) {
+	if (!next_type || btf_type_is_resolve_source_only(next_type)) {
btf_verifier_log_type(env, v->t, "Invalid type_id"); btf_verifier_log_type(env, v->t, "Invalid type_id");
return -EINVAL; return -EINVAL;
} }
@ -1609,6 +1752,15 @@ static void btf_modifier_seq_show(const struct btf *btf,
btf_type_ops(t)->seq_show(btf, t, type_id, data, bits_offset, m); btf_type_ops(t)->seq_show(btf, t, type_id, data, bits_offset, m);
} }
static void btf_var_seq_show(const struct btf *btf, const struct btf_type *t,
u32 type_id, void *data, u8 bits_offset,
struct seq_file *m)
{
t = btf_type_id_resolve(btf, &type_id);
btf_type_ops(t)->seq_show(btf, t, type_id, data, bits_offset, m);
}
static void btf_ptr_seq_show(const struct btf *btf, const struct btf_type *t, static void btf_ptr_seq_show(const struct btf *btf, const struct btf_type *t,
u32 type_id, void *data, u8 bits_offset, u32 type_id, void *data, u8 bits_offset,
struct seq_file *m) struct seq_file *m)
@ -1776,7 +1928,8 @@ static int btf_array_resolve(struct btf_verifier_env *env,
/* Check array->index_type */ /* Check array->index_type */
index_type_id = array->index_type; index_type_id = array->index_type;
index_type = btf_type_by_id(btf, index_type_id); index_type = btf_type_by_id(btf, index_type_id);
-	if (btf_type_nosize_or_null(index_type)) {
+	if (btf_type_is_resolve_source_only(index_type) ||
+	    btf_type_nosize_or_null(index_type)) {
btf_verifier_log_type(env, v->t, "Invalid index"); btf_verifier_log_type(env, v->t, "Invalid index");
return -EINVAL; return -EINVAL;
} }
@ -1795,7 +1948,8 @@ static int btf_array_resolve(struct btf_verifier_env *env,
/* Check array->type */ /* Check array->type */
elem_type_id = array->type; elem_type_id = array->type;
elem_type = btf_type_by_id(btf, elem_type_id); elem_type = btf_type_by_id(btf, elem_type_id);
-	if (btf_type_nosize_or_null(elem_type)) {
+	if (btf_type_is_resolve_source_only(elem_type) ||
+	    btf_type_nosize_or_null(elem_type)) {
btf_verifier_log_type(env, v->t, btf_verifier_log_type(env, v->t,
"Invalid elem"); "Invalid elem");
return -EINVAL; return -EINVAL;
@ -2016,7 +2170,8 @@ static int btf_struct_resolve(struct btf_verifier_env *env,
const struct btf_type *member_type = btf_type_by_id(env->btf, const struct btf_type *member_type = btf_type_by_id(env->btf,
member_type_id); member_type_id);
-		if (btf_type_nosize_or_null(member_type)) {
+		if (btf_type_is_resolve_source_only(member_type) ||
+		    btf_type_nosize_or_null(member_type)) {
btf_verifier_log_member(env, v->t, member, btf_verifier_log_member(env, v->t, member,
"Invalid member"); "Invalid member");
return -EINVAL; return -EINVAL;
@ -2411,6 +2566,222 @@ static struct btf_kind_operations func_ops = {
.seq_show = btf_df_seq_show, .seq_show = btf_df_seq_show,
}; };
static s32 btf_var_check_meta(struct btf_verifier_env *env,
const struct btf_type *t,
u32 meta_left)
{
const struct btf_var *var;
u32 meta_needed = sizeof(*var);
if (meta_left < meta_needed) {
btf_verifier_log_basic(env, t,
"meta_left:%u meta_needed:%u",
meta_left, meta_needed);
return -EINVAL;
}
if (btf_type_vlen(t)) {
btf_verifier_log_type(env, t, "vlen != 0");
return -EINVAL;
}
if (btf_type_kflag(t)) {
btf_verifier_log_type(env, t, "Invalid btf_info kind_flag");
return -EINVAL;
}
if (!t->name_off ||
!__btf_name_valid(env->btf, t->name_off, true)) {
btf_verifier_log_type(env, t, "Invalid name");
return -EINVAL;
}
/* A var cannot be in type void */
if (!t->type || !BTF_TYPE_ID_VALID(t->type)) {
btf_verifier_log_type(env, t, "Invalid type_id");
return -EINVAL;
}
var = btf_type_var(t);
if (var->linkage != BTF_VAR_STATIC &&
var->linkage != BTF_VAR_GLOBAL_ALLOCATED) {
btf_verifier_log_type(env, t, "Linkage not supported");
return -EINVAL;
}
btf_verifier_log_type(env, t, NULL);
return meta_needed;
}
static void btf_var_log(struct btf_verifier_env *env, const struct btf_type *t)
{
const struct btf_var *var = btf_type_var(t);
btf_verifier_log(env, "type_id=%u linkage=%u", t->type, var->linkage);
}
static const struct btf_kind_operations var_ops = {
.check_meta = btf_var_check_meta,
.resolve = btf_var_resolve,
.check_member = btf_df_check_member,
.check_kflag_member = btf_df_check_kflag_member,
.log_details = btf_var_log,
.seq_show = btf_var_seq_show,
};
static s32 btf_datasec_check_meta(struct btf_verifier_env *env,
const struct btf_type *t,
u32 meta_left)
{
const struct btf_var_secinfo *vsi;
u64 last_vsi_end_off = 0, sum = 0;
u32 i, meta_needed;
meta_needed = btf_type_vlen(t) * sizeof(*vsi);
if (meta_left < meta_needed) {
btf_verifier_log_basic(env, t,
"meta_left:%u meta_needed:%u",
meta_left, meta_needed);
return -EINVAL;
}
if (!btf_type_vlen(t)) {
btf_verifier_log_type(env, t, "vlen == 0");
return -EINVAL;
}
if (!t->size) {
btf_verifier_log_type(env, t, "size == 0");
return -EINVAL;
}
if (btf_type_kflag(t)) {
btf_verifier_log_type(env, t, "Invalid btf_info kind_flag");
return -EINVAL;
}
if (!t->name_off ||
!btf_name_valid_section(env->btf, t->name_off)) {
btf_verifier_log_type(env, t, "Invalid name");
return -EINVAL;
}
btf_verifier_log_type(env, t, NULL);
for_each_vsi(i, t, vsi) {
/* A var cannot be in type void */
if (!vsi->type || !BTF_TYPE_ID_VALID(vsi->type)) {
btf_verifier_log_vsi(env, t, vsi,
"Invalid type_id");
return -EINVAL;
}
if (vsi->offset < last_vsi_end_off || vsi->offset >= t->size) {
btf_verifier_log_vsi(env, t, vsi,
"Invalid offset");
return -EINVAL;
}
if (!vsi->size || vsi->size > t->size) {
btf_verifier_log_vsi(env, t, vsi,
"Invalid size");
return -EINVAL;
}
last_vsi_end_off = vsi->offset + vsi->size;
if (last_vsi_end_off > t->size) {
btf_verifier_log_vsi(env, t, vsi,
"Invalid offset+size");
return -EINVAL;
}
btf_verifier_log_vsi(env, t, vsi, NULL);
sum += vsi->size;
}
if (t->size < sum) {
btf_verifier_log_type(env, t, "Invalid btf_info size");
return -EINVAL;
}
return meta_needed;
}
static int btf_datasec_resolve(struct btf_verifier_env *env,
const struct resolve_vertex *v)
{
const struct btf_var_secinfo *vsi;
struct btf *btf = env->btf;
u16 i;
for_each_vsi_from(i, v->next_member, v->t, vsi) {
u32 var_type_id = vsi->type, type_id, type_size = 0;
const struct btf_type *var_type = btf_type_by_id(env->btf,
var_type_id);
if (!var_type || !btf_type_is_var(var_type)) {
btf_verifier_log_vsi(env, v->t, vsi,
"Not a VAR kind member");
return -EINVAL;
}
if (!env_type_is_resolve_sink(env, var_type) &&
!env_type_is_resolved(env, var_type_id)) {
env_stack_set_next_member(env, i + 1);
return env_stack_push(env, var_type, var_type_id);
}
type_id = var_type->type;
if (!btf_type_id_size(btf, &type_id, &type_size)) {
btf_verifier_log_vsi(env, v->t, vsi, "Invalid type");
return -EINVAL;
}
if (vsi->size < type_size) {
btf_verifier_log_vsi(env, v->t, vsi, "Invalid size");
return -EINVAL;
}
}
env_stack_pop_resolved(env, 0, 0);
return 0;
}
static void btf_datasec_log(struct btf_verifier_env *env,
const struct btf_type *t)
{
btf_verifier_log(env, "size=%u vlen=%u", t->size, btf_type_vlen(t));
}
static void btf_datasec_seq_show(const struct btf *btf,
const struct btf_type *t, u32 type_id,
void *data, u8 bits_offset,
struct seq_file *m)
{
const struct btf_var_secinfo *vsi;
const struct btf_type *var;
u32 i;
seq_printf(m, "section (\"%s\") = {", __btf_name_by_offset(btf, t->name_off));
for_each_vsi(i, t, vsi) {
var = btf_type_by_id(btf, vsi->type);
if (i)
seq_puts(m, ",");
btf_type_ops(var)->seq_show(btf, var, vsi->type,
data + vsi->offset, bits_offset, m);
}
seq_puts(m, "}");
}
static const struct btf_kind_operations datasec_ops = {
.check_meta = btf_datasec_check_meta,
.resolve = btf_datasec_resolve,
.check_member = btf_df_check_member,
.check_kflag_member = btf_df_check_kflag_member,
.log_details = btf_datasec_log,
.seq_show = btf_datasec_seq_show,
};
static int btf_func_proto_check(struct btf_verifier_env *env, static int btf_func_proto_check(struct btf_verifier_env *env,
const struct btf_type *t) const struct btf_type *t)
{ {
@ -2542,6 +2913,8 @@ static const struct btf_kind_operations * const kind_ops[NR_BTF_KINDS] = {
[BTF_KIND_RESTRICT] = &modifier_ops, [BTF_KIND_RESTRICT] = &modifier_ops,
[BTF_KIND_FUNC] = &func_ops, [BTF_KIND_FUNC] = &func_ops,
[BTF_KIND_FUNC_PROTO] = &func_proto_ops, [BTF_KIND_FUNC_PROTO] = &func_proto_ops,
[BTF_KIND_VAR] = &var_ops,
[BTF_KIND_DATASEC] = &datasec_ops,
}; };
static s32 btf_check_meta(struct btf_verifier_env *env, static s32 btf_check_meta(struct btf_verifier_env *env,
@ -2622,13 +2995,17 @@ static bool btf_resolve_valid(struct btf_verifier_env *env,
if (!env_type_is_resolved(env, type_id)) if (!env_type_is_resolved(env, type_id))
return false; return false;
-	if (btf_type_is_struct(t))
+	if (btf_type_is_struct(t) || btf_type_is_datasec(t))
 		return !btf->resolved_ids[type_id] &&
 		       !btf->resolved_sizes[type_id];
 
-	if (btf_type_is_modifier(t) || btf_type_is_ptr(t)) {
+	if (btf_type_is_modifier(t) || btf_type_is_ptr(t) ||
+	    btf_type_is_var(t)) {
 		t = btf_type_id_resolve(btf, &type_id);
-		return t && !btf_type_is_modifier(t);
+		return t &&
+		       !btf_type_is_modifier(t) &&
+		       !btf_type_is_var(t) &&
+		       !btf_type_is_datasec(t);
} }
if (btf_type_is_array(t)) { if (btf_type_is_array(t)) {


@ -292,7 +292,8 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
dst[i] = fp->insnsi[i]; dst[i] = fp->insnsi[i];
if (!was_ld_map && if (!was_ld_map &&
dst[i].code == (BPF_LD | BPF_IMM | BPF_DW) && dst[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
-		    dst[i].src_reg == BPF_PSEUDO_MAP_FD) {
+		    (dst[i].src_reg == BPF_PSEUDO_MAP_FD ||
+		     dst[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
was_ld_map = true; was_ld_map = true;
dst[i].imm = 0; dst[i].imm = 0;
} else if (was_ld_map && } else if (was_ld_map &&


@ -205,10 +205,11 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
* part of the ldimm64 insn is accessible. * part of the ldimm64 insn is accessible.
*/ */
u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm; u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm;
-		bool map_ptr = insn->src_reg == BPF_PSEUDO_MAP_FD;
+		bool is_ptr = insn->src_reg == BPF_PSEUDO_MAP_FD ||
+			      insn->src_reg == BPF_PSEUDO_MAP_VALUE;
 		char tmp[64];
 
-		if (map_ptr && !allow_ptr_leaks)
+		if (is_ptr && !allow_ptr_leaks)
imm = 0; imm = 0;
verbose(cbs->private_data, "(%02x) r%d = %s\n", verbose(cbs->private_data, "(%02x) r%d = %s\n",


@ -23,7 +23,7 @@
#define HTAB_CREATE_FLAG_MASK \ #define HTAB_CREATE_FLAG_MASK \
(BPF_F_NO_PREALLOC | BPF_F_NO_COMMON_LRU | BPF_F_NUMA_NODE | \ (BPF_F_NO_PREALLOC | BPF_F_NO_COMMON_LRU | BPF_F_NUMA_NODE | \
-	 BPF_F_RDONLY | BPF_F_WRONLY | BPF_F_ZERO_SEED)
+	 BPF_F_ACCESS_MASK | BPF_F_ZERO_SEED)
struct bucket { struct bucket {
struct hlist_nulls_head head; struct hlist_nulls_head head;
@ -262,8 +262,8 @@ static int htab_map_alloc_check(union bpf_attr *attr)
/* Guard against local DoS, and discourage production use. */ /* Guard against local DoS, and discourage production use. */
return -EPERM; return -EPERM;
-	if (attr->map_flags & ~HTAB_CREATE_FLAG_MASK)
-		/* reserved bits should not be used */
+	if (attr->map_flags & ~HTAB_CREATE_FLAG_MASK ||
+	    !bpf_map_flags_access_ok(attr->map_flags))
 		return -EINVAL;
if (!lru && percpu_lru) if (!lru && percpu_lru)


@ -14,7 +14,7 @@ DEFINE_PER_CPU(struct bpf_cgroup_storage*, bpf_cgroup_storage[MAX_BPF_CGROUP_STO
#ifdef CONFIG_CGROUP_BPF #ifdef CONFIG_CGROUP_BPF
#define LOCAL_STORAGE_CREATE_FLAG_MASK \ #define LOCAL_STORAGE_CREATE_FLAG_MASK \
-	(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)
+	(BPF_F_NUMA_NODE | BPF_F_ACCESS_MASK)
struct bpf_cgroup_storage_map { struct bpf_cgroup_storage_map {
struct bpf_map map; struct bpf_map map;
@ -282,8 +282,8 @@ static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
if (attr->value_size > PAGE_SIZE) if (attr->value_size > PAGE_SIZE)
return ERR_PTR(-E2BIG); return ERR_PTR(-E2BIG);
-	if (attr->map_flags & ~LOCAL_STORAGE_CREATE_FLAG_MASK)
-		/* reserved bits should not be used */
+	if (attr->map_flags & ~LOCAL_STORAGE_CREATE_FLAG_MASK ||
+	    !bpf_map_flags_access_ok(attr->map_flags))
 		return ERR_PTR(-EINVAL);
if (attr->max_entries) if (attr->max_entries)


@ -538,7 +538,7 @@ out:
#define LPM_KEY_SIZE_MIN LPM_KEY_SIZE(LPM_DATA_SIZE_MIN) #define LPM_KEY_SIZE_MIN LPM_KEY_SIZE(LPM_DATA_SIZE_MIN)
#define LPM_CREATE_FLAG_MASK (BPF_F_NO_PREALLOC | BPF_F_NUMA_NODE | \ #define LPM_CREATE_FLAG_MASK (BPF_F_NO_PREALLOC | BPF_F_NUMA_NODE | \
-				 BPF_F_RDONLY | BPF_F_WRONLY)
+				 BPF_F_ACCESS_MASK)
static struct bpf_map *trie_alloc(union bpf_attr *attr) static struct bpf_map *trie_alloc(union bpf_attr *attr)
{ {
@ -553,6 +553,7 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr)
if (attr->max_entries == 0 || if (attr->max_entries == 0 ||
!(attr->map_flags & BPF_F_NO_PREALLOC) || !(attr->map_flags & BPF_F_NO_PREALLOC) ||
attr->map_flags & ~LPM_CREATE_FLAG_MASK || attr->map_flags & ~LPM_CREATE_FLAG_MASK ||
!bpf_map_flags_access_ok(attr->map_flags) ||
attr->key_size < LPM_KEY_SIZE_MIN || attr->key_size < LPM_KEY_SIZE_MIN ||
attr->key_size > LPM_KEY_SIZE_MAX || attr->key_size > LPM_KEY_SIZE_MAX ||
attr->value_size < LPM_VAL_SIZE_MIN || attr->value_size < LPM_VAL_SIZE_MIN ||


@ -11,8 +11,7 @@
#include "percpu_freelist.h" #include "percpu_freelist.h"
#define QUEUE_STACK_CREATE_FLAG_MASK \ #define QUEUE_STACK_CREATE_FLAG_MASK \
-	(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)
+	(BPF_F_NUMA_NODE | BPF_F_ACCESS_MASK)
struct bpf_queue_stack { struct bpf_queue_stack {
struct bpf_map map; struct bpf_map map;
@ -52,7 +51,8 @@ static int queue_stack_map_alloc_check(union bpf_attr *attr)
/* check sanity of attributes */ /* check sanity of attributes */
if (attr->max_entries == 0 || attr->key_size != 0 || if (attr->max_entries == 0 || attr->key_size != 0 ||
attr->value_size == 0 || attr->value_size == 0 ||
-	    attr->map_flags & ~QUEUE_STACK_CREATE_FLAG_MASK)
+	    attr->map_flags & ~QUEUE_STACK_CREATE_FLAG_MASK ||
+	    !bpf_map_flags_access_ok(attr->map_flags))
return -EINVAL; return -EINVAL;
if (attr->value_size > KMALLOC_MAX_SIZE) if (attr->value_size > KMALLOC_MAX_SIZE)


@ -166,13 +166,25 @@ void bpf_map_area_free(void *area)
kvfree(area); kvfree(area);
} }
static u32 bpf_map_flags_retain_permanent(u32 flags)
{
/* Some map creation flags are not tied to the map object but
* rather to the map fd instead, so they have no meaning upon
* map object inspection since multiple file descriptors with
* different (access) properties can exist here. Thus, given
* this has zero meaning for the map itself, lets clear these
* from here.
*/
return flags & ~(BPF_F_RDONLY | BPF_F_WRONLY);
}
void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr) void bpf_map_init_from_attr(struct bpf_map *map, union bpf_attr *attr)
{ {
map->map_type = attr->map_type; map->map_type = attr->map_type;
map->key_size = attr->key_size; map->key_size = attr->key_size;
map->value_size = attr->value_size; map->value_size = attr->value_size;
map->max_entries = attr->max_entries; map->max_entries = attr->max_entries;
-	map->map_flags = attr->map_flags;
+	map->map_flags = bpf_map_flags_retain_permanent(attr->map_flags);
map->numa_node = bpf_map_attr_numa_node(attr); map->numa_node = bpf_map_attr_numa_node(attr);
} }
@ -343,6 +355,18 @@ static int bpf_map_release(struct inode *inode, struct file *filp)
return 0; return 0;
} }
static fmode_t map_get_sys_perms(struct bpf_map *map, struct fd f)
{
fmode_t mode = f.file->f_mode;
/* Our file permissions may have been overridden by global
* map permissions facing syscall side.
*/
if (READ_ONCE(map->frozen))
mode &= ~FMODE_CAN_WRITE;
return mode;
}
#ifdef CONFIG_PROC_FS #ifdef CONFIG_PROC_FS
static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp) static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
{ {
@ -364,14 +388,16 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
"max_entries:\t%u\n" "max_entries:\t%u\n"
"map_flags:\t%#x\n" "map_flags:\t%#x\n"
"memlock:\t%llu\n" "memlock:\t%llu\n"
"map_id:\t%u\n", "map_id:\t%u\n"
"frozen:\t%u\n",
map->map_type, map->map_type,
map->key_size, map->key_size,
map->value_size, map->value_size,
map->max_entries, map->max_entries,
map->map_flags, map->map_flags,
map->pages * 1ULL << PAGE_SHIFT, map->pages * 1ULL << PAGE_SHIFT,
map->id); map->id,
READ_ONCE(map->frozen));
if (owner_prog_type) { if (owner_prog_type) {
seq_printf(m, "owner_prog_type:\t%u\n", seq_printf(m, "owner_prog_type:\t%u\n",
@ -448,10 +474,10 @@ static int bpf_obj_name_cpy(char *dst, const char *src)
const char *end = src + BPF_OBJ_NAME_LEN; const char *end = src + BPF_OBJ_NAME_LEN;
memset(dst, 0, BPF_OBJ_NAME_LEN); memset(dst, 0, BPF_OBJ_NAME_LEN);
-	/* Copy all isalnum() and '_' char */
+	/* Copy all isalnum(), '_' and '.' chars. */
 	while (src < end && *src) {
-		if (!isalnum(*src) && *src != '_')
+		if (!isalnum(*src) &&
+		    *src != '_' && *src != '.')
return -EINVAL; return -EINVAL;
*dst++ = *src++; *dst++ = *src++;
} }
@ -478,9 +504,16 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
u32 key_size, value_size; u32 key_size, value_size;
int ret = 0; int ret = 0;
-	key_type = btf_type_id_size(btf, &btf_key_id, &key_size);
-	if (!key_type || key_size != map->key_size)
-		return -EINVAL;
+	/* Some maps allow key to be unspecified. */
+	if (btf_key_id) {
+		key_type = btf_type_id_size(btf, &btf_key_id, &key_size);
+		if (!key_type || key_size != map->key_size)
+			return -EINVAL;
+	} else {
+		key_type = btf_type_by_id(btf, 0);
+		if (!map->ops->map_check_btf)
+			return -EINVAL;
+	}
value_type = btf_type_id_size(btf, &btf_value_id, &value_size); value_type = btf_type_id_size(btf, &btf_value_id, &value_size);
if (!value_type || value_size != map->value_size) if (!value_type || value_size != map->value_size)
@ -489,6 +522,8 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
map->spin_lock_off = btf_find_spin_lock(btf, value_type); map->spin_lock_off = btf_find_spin_lock(btf, value_type);
if (map_value_has_spin_lock(map)) { if (map_value_has_spin_lock(map)) {
if (map->map_flags & BPF_F_RDONLY_PROG)
return -EACCES;
if (map->map_type != BPF_MAP_TYPE_HASH && if (map->map_type != BPF_MAP_TYPE_HASH &&
map->map_type != BPF_MAP_TYPE_ARRAY && map->map_type != BPF_MAP_TYPE_ARRAY &&
map->map_type != BPF_MAP_TYPE_CGROUP_STORAGE) map->map_type != BPF_MAP_TYPE_CGROUP_STORAGE)
@ -545,7 +580,7 @@ static int map_create(union bpf_attr *attr)
if (attr->btf_key_type_id || attr->btf_value_type_id) { if (attr->btf_key_type_id || attr->btf_value_type_id) {
struct btf *btf; struct btf *btf;
-		if (!attr->btf_key_type_id || !attr->btf_value_type_id) {
+		if (!attr->btf_value_type_id) {
err = -EINVAL; err = -EINVAL;
goto free_map_nouncharge; goto free_map_nouncharge;
} }
@ -713,8 +748,7 @@ static int map_lookup_elem(union bpf_attr *attr)
map = __bpf_map_get(f); map = __bpf_map_get(f);
if (IS_ERR(map)) if (IS_ERR(map))
return PTR_ERR(map); return PTR_ERR(map);
-	if (!(f.file->f_mode & FMODE_CAN_READ)) {
+	if (!(map_get_sys_perms(map, f) & FMODE_CAN_READ)) {
err = -EPERM; err = -EPERM;
goto err_put; goto err_put;
} }
@ -843,8 +877,7 @@ static int map_update_elem(union bpf_attr *attr)
map = __bpf_map_get(f); map = __bpf_map_get(f);
if (IS_ERR(map)) if (IS_ERR(map))
return PTR_ERR(map); return PTR_ERR(map);
-	if (!(f.file->f_mode & FMODE_CAN_WRITE)) {
+	if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
err = -EPERM; err = -EPERM;
goto err_put; goto err_put;
} }
@ -955,8 +988,7 @@ static int map_delete_elem(union bpf_attr *attr)
map = __bpf_map_get(f); map = __bpf_map_get(f);
if (IS_ERR(map)) if (IS_ERR(map))
return PTR_ERR(map); return PTR_ERR(map);
-	if (!(f.file->f_mode & FMODE_CAN_WRITE)) {
+	if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
err = -EPERM; err = -EPERM;
goto err_put; goto err_put;
} }
@ -1007,8 +1039,7 @@ static int map_get_next_key(union bpf_attr *attr)
map = __bpf_map_get(f); map = __bpf_map_get(f);
if (IS_ERR(map)) if (IS_ERR(map))
return PTR_ERR(map); return PTR_ERR(map);
-	if (!(f.file->f_mode & FMODE_CAN_READ)) {
+	if (!(map_get_sys_perms(map, f) & FMODE_CAN_READ)) {
err = -EPERM; err = -EPERM;
goto err_put; goto err_put;
} }
@ -1075,8 +1106,7 @@ static int map_lookup_and_delete_elem(union bpf_attr *attr)
map = __bpf_map_get(f); map = __bpf_map_get(f);
if (IS_ERR(map)) if (IS_ERR(map))
return PTR_ERR(map); return PTR_ERR(map);
-	if (!(f.file->f_mode & FMODE_CAN_WRITE)) {
+	if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
err = -EPERM; err = -EPERM;
goto err_put; goto err_put;
} }
@ -1118,6 +1148,36 @@ err_put:
return err; return err;
} }
#define BPF_MAP_FREEZE_LAST_FIELD map_fd
static int map_freeze(const union bpf_attr *attr)
{
int err = 0, ufd = attr->map_fd;
struct bpf_map *map;
struct fd f;
if (CHECK_ATTR(BPF_MAP_FREEZE))
return -EINVAL;
f = fdget(ufd);
map = __bpf_map_get(f);
if (IS_ERR(map))
return PTR_ERR(map);
if (READ_ONCE(map->frozen)) {
err = -EBUSY;
goto err_put;
}
if (!capable(CAP_SYS_ADMIN)) {
err = -EPERM;
goto err_put;
}
WRITE_ONCE(map->frozen, true);
err_put:
fdput(f);
return err;
}
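
A hedged user-space sketch of the new command (not from this patch,
same headers as the earlier bpf(2) sketches): freeze a map after it
has been populated, so that later syscall-side writes fail with
-EPERM while programs can still read it. Per map_freeze() above this
requires CAP_SYS_ADMIN.

    static int freeze_map(int map_fd)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.map_fd = map_fd;

            return syscall(__NR_bpf, BPF_MAP_FREEZE, &attr, sizeof(attr));
    }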
static const struct bpf_prog_ops * const bpf_prog_types[] = { static const struct bpf_prog_ops * const bpf_prog_types[] = {
#define BPF_PROG_TYPE(_id, _name) \ #define BPF_PROG_TYPE(_id, _name) \
[_id] = & _name ## _prog_ops, [_id] = & _name ## _prog_ops,
@ -2072,13 +2132,26 @@ static int bpf_map_get_fd_by_id(const union bpf_attr *attr)
} }
 static const struct bpf_map *bpf_map_from_imm(const struct bpf_prog *prog,
-					      unsigned long addr)
+					      unsigned long addr, u32 *off,
+					      u32 *type)
 {
+	const struct bpf_map *map;
 	int i;
 
-	for (i = 0; i < prog->aux->used_map_cnt; i++)
-		if (prog->aux->used_maps[i] == (void *)addr)
-			return prog->aux->used_maps[i];
+	for (i = 0, *off = 0; i < prog->aux->used_map_cnt; i++) {
+		map = prog->aux->used_maps[i];
+		if (map == (void *)addr) {
+			*type = BPF_PSEUDO_MAP_FD;
+			return map;
+		}
+		if (!map->ops->map_direct_value_meta)
+			continue;
+		if (!map->ops->map_direct_value_meta(map, addr, off)) {
+			*type = BPF_PSEUDO_MAP_VALUE;
+			return map;
+		}
+	}
+
 	return NULL;
 }
@ -2086,6 +2159,7 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
{ {
const struct bpf_map *map; const struct bpf_map *map;
struct bpf_insn *insns; struct bpf_insn *insns;
u32 off, type;
u64 imm; u64 imm;
int i; int i;
@ -2113,11 +2187,11 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)
continue; continue;
imm = ((u64)insns[i + 1].imm << 32) | (u32)insns[i].imm; imm = ((u64)insns[i + 1].imm << 32) | (u32)insns[i].imm;
-		map = bpf_map_from_imm(prog, imm);
+		map = bpf_map_from_imm(prog, imm, &off, &type);
 		if (map) {
-			insns[i].src_reg = BPF_PSEUDO_MAP_FD;
+			insns[i].src_reg = type;
 			insns[i].imm = map->id;
-			insns[i + 1].imm = 0;
+			insns[i + 1].imm = off;
continue; continue;
} }
} }
@ -2707,6 +2781,9 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_MAP_GET_NEXT_KEY: case BPF_MAP_GET_NEXT_KEY:
err = map_get_next_key(&attr); err = map_get_next_key(&attr);
break; break;
case BPF_MAP_FREEZE:
err = map_freeze(&attr);
break;
case BPF_PROG_LOAD: case BPF_PROG_LOAD:
err = bpf_prog_load(&attr, uattr); err = bpf_prog_load(&attr, uattr);
break; break;


@ -1439,6 +1439,28 @@ static int check_stack_access(struct bpf_verifier_env *env,
return 0; return 0;
} }
static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
int off, int size, enum bpf_access_type type)
{
struct bpf_reg_state *regs = cur_regs(env);
struct bpf_map *map = regs[regno].map_ptr;
u32 cap = bpf_map_flags_to_cap(map);
if (type == BPF_WRITE && !(cap & BPF_MAP_CAN_WRITE)) {
verbose(env, "write into map forbidden, value_size=%d off=%d size=%d\n",
map->value_size, off, size);
return -EACCES;
}
if (type == BPF_READ && !(cap & BPF_MAP_CAN_READ)) {
verbose(env, "read from map forbidden, value_size=%d off=%d size=%d\n",
map->value_size, off, size);
return -EACCES;
}
return 0;
}
/* check read/write into map element returned by bpf_map_lookup_elem() */ /* check read/write into map element returned by bpf_map_lookup_elem() */
static int __check_map_access(struct bpf_verifier_env *env, u32 regno, int off, static int __check_map_access(struct bpf_verifier_env *env, u32 regno, int off,
int size, bool zero_size_allowed) int size, bool zero_size_allowed)
@ -2024,7 +2046,9 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
verbose(env, "R%d leaks addr into map\n", value_regno); verbose(env, "R%d leaks addr into map\n", value_regno);
return -EACCES; return -EACCES;
} }
err = check_map_access_type(env, regno, off, size, t);
if (err)
return err;
err = check_map_access(env, regno, off, size, false); err = check_map_access(env, regno, off, size, false);
if (!err && t == BPF_READ && value_regno >= 0) if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno); mark_reg_unknown(env, regs, value_regno);
@ -2327,6 +2351,10 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
return check_packet_access(env, regno, reg->off, access_size, return check_packet_access(env, regno, reg->off, access_size,
zero_size_allowed); zero_size_allowed);
case PTR_TO_MAP_VALUE: case PTR_TO_MAP_VALUE:
if (check_map_access_type(env, regno, reg->off, access_size,
meta && meta->raw_mode ? BPF_WRITE :
BPF_READ))
return -EACCES;
return check_map_access(env, regno, reg->off, access_size, return check_map_access(env, regno, reg->off, access_size,
zero_size_allowed); zero_size_allowed);
default: /* scalar_value|ptr_to_stack or invalid ptr */ default: /* scalar_value|ptr_to_stack or invalid ptr */
@ -3059,6 +3087,7 @@ record_func_map(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
int func_id, int insn_idx) int func_id, int insn_idx)
{ {
struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx]; struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
struct bpf_map *map = meta->map_ptr;
if (func_id != BPF_FUNC_tail_call && if (func_id != BPF_FUNC_tail_call &&
func_id != BPF_FUNC_map_lookup_elem && func_id != BPF_FUNC_map_lookup_elem &&
@ -3069,11 +3098,24 @@ record_func_map(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
func_id != BPF_FUNC_map_peek_elem) func_id != BPF_FUNC_map_peek_elem)
return 0; return 0;
-	if (meta->map_ptr == NULL) {
+	if (map == NULL) {
verbose(env, "kernel subsystem misconfigured verifier\n"); verbose(env, "kernel subsystem misconfigured verifier\n");
return -EINVAL; return -EINVAL;
} }
/* In case of read-only, some additional restrictions
* need to be applied in order to prevent altering the
* state of the map from program side.
*/
if ((map->map_flags & BPF_F_RDONLY_PROG) &&
(func_id == BPF_FUNC_map_delete_elem ||
func_id == BPF_FUNC_map_update_elem ||
func_id == BPF_FUNC_map_push_elem ||
func_id == BPF_FUNC_map_pop_elem)) {
verbose(env, "write into map forbidden\n");
return -EACCES;
}
if (!BPF_MAP_PTR(aux->map_state)) if (!BPF_MAP_PTR(aux->map_state))
bpf_map_ptr_store(aux, meta->map_ptr, bpf_map_ptr_store(aux, meta->map_ptr,
meta->map_ptr->unpriv_array); meta->map_ptr->unpriv_array);
@ -5056,18 +5098,12 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
return 0; return 0;
} }
-/* return the map pointer stored inside BPF_LD_IMM64 instruction */
-static struct bpf_map *ld_imm64_to_map_ptr(struct bpf_insn *insn)
-{
-	u64 imm64 = ((u64) (u32) insn[0].imm) | ((u64) (u32) insn[1].imm) << 32;
-
-	return (struct bpf_map *) (unsigned long) imm64;
-}
/* verify BPF_LD_IMM64 instruction */ /* verify BPF_LD_IMM64 instruction */
static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn) static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
{ {
struct bpf_insn_aux_data *aux = cur_aux(env);
struct bpf_reg_state *regs = cur_regs(env); struct bpf_reg_state *regs = cur_regs(env);
struct bpf_map *map;
int err; int err;
if (BPF_SIZE(insn->code) != BPF_DW) { if (BPF_SIZE(insn->code) != BPF_DW) {
@ -5091,11 +5127,22 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
return 0; return 0;
} }
-	/* replace_map_fd_with_map_ptr() should have caught bad ld_imm64 */
-	BUG_ON(insn->src_reg != BPF_PSEUDO_MAP_FD);
-	regs[insn->dst_reg].type = CONST_PTR_TO_MAP;
-	regs[insn->dst_reg].map_ptr = ld_imm64_to_map_ptr(insn);
+	map = env->used_maps[aux->map_index];
+	mark_reg_known_zero(env, regs, insn->dst_reg);
+	regs[insn->dst_reg].map_ptr = map;
+
+	if (insn->src_reg == BPF_PSEUDO_MAP_VALUE) {
+		regs[insn->dst_reg].type = PTR_TO_MAP_VALUE;
+		regs[insn->dst_reg].off = aux->map_off;
+		if (map_value_has_spin_lock(map))
+			regs[insn->dst_reg].id = ++env->id_gen;
+	} else if (insn->src_reg == BPF_PSEUDO_MAP_FD) {
+		regs[insn->dst_reg].type = CONST_PTR_TO_MAP;
+	} else {
+		verbose(env, "bpf verifier is misconfigured\n");
+		return -EINVAL;
+	}
return 0; return 0;
} }
@ -6803,8 +6850,10 @@ static int replace_map_fd_with_map_ptr(struct bpf_verifier_env *env)
} }
if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) { if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) {
struct bpf_insn_aux_data *aux;
struct bpf_map *map; struct bpf_map *map;
struct fd f; struct fd f;
u64 addr;
if (i == insn_cnt - 1 || insn[1].code != 0 || if (i == insn_cnt - 1 || insn[1].code != 0 ||
insn[1].dst_reg != 0 || insn[1].src_reg != 0 || insn[1].dst_reg != 0 || insn[1].src_reg != 0 ||
@ -6813,13 +6862,19 @@ static int replace_map_fd_with_map_ptr(struct bpf_verifier_env *env)
return -EINVAL; return -EINVAL;
} }
-			if (insn->src_reg == 0)
+			if (insn[0].src_reg == 0)
 				/* valid generic load 64-bit imm */
 				goto next_insn;
 
-			if (insn[0].src_reg != BPF_PSEUDO_MAP_FD ||
-			    insn[1].imm != 0) {
-				verbose(env, "unrecognized bpf_ld_imm64 insn\n");
+			/* In final convert_pseudo_ld_imm64() step, this is
+			 * converted into regular 64-bit imm load insn.
+			 */
+			if ((insn[0].src_reg != BPF_PSEUDO_MAP_FD &&
+			     insn[0].src_reg != BPF_PSEUDO_MAP_VALUE) ||
+			    (insn[0].src_reg == BPF_PSEUDO_MAP_FD &&
+			     insn[1].imm != 0)) {
+				verbose(env,
+					"unrecognized bpf_ld_imm64 insn\n");
return -EINVAL; return -EINVAL;
} }
@ -6837,16 +6892,47 @@ static int replace_map_fd_with_map_ptr(struct bpf_verifier_env *env)
return err; return err;
} }
-			/* store map pointer inside BPF_LD_IMM64 instruction */
-			insn[0].imm = (u32) (unsigned long) map;
-			insn[1].imm = ((u64) (unsigned long) map) >> 32;
+			aux = &env->insn_aux_data[i];
+			if (insn->src_reg == BPF_PSEUDO_MAP_FD) {
+				addr = (unsigned long)map;
} else {
u32 off = insn[1].imm;
if (off >= BPF_MAX_VAR_OFF) {
verbose(env, "direct value offset of %u is not allowed\n", off);
fdput(f);
return -EINVAL;
}
if (!map->ops->map_direct_value_addr) {
verbose(env, "no direct value access support for this map type\n");
fdput(f);
return -EINVAL;
}
err = map->ops->map_direct_value_addr(map, &addr, off);
if (err) {
verbose(env, "invalid access to map value pointer, value_size=%u off=%u\n",
map->value_size, off);
fdput(f);
return err;
}
aux->map_off = off;
addr += off;
}
insn[0].imm = (u32)addr;
insn[1].imm = addr >> 32;
/* check whether we recorded this map already */ /* check whether we recorded this map already */
for (j = 0; j < env->used_map_cnt; j++) for (j = 0; j < env->used_map_cnt; j++) {
if (env->used_maps[j] == map) { if (env->used_maps[j] == map) {
aux->map_index = j;
fdput(f); fdput(f);
goto next_insn; goto next_insn;
} }
}
if (env->used_map_cnt >= MAX_USED_MAPS) { if (env->used_map_cnt >= MAX_USED_MAPS) {
fdput(f); fdput(f);
@ -6863,6 +6949,8 @@ static int replace_map_fd_with_map_ptr(struct bpf_verifier_env *env)
fdput(f); fdput(f);
return PTR_ERR(map); return PTR_ERR(map);
} }
aux->map_index = env->used_map_cnt;
env->used_maps[env->used_map_cnt++] = map; env->used_maps[env->used_map_cnt++] = map;
if (bpf_map_is_cgroup_storage(map) && if (bpf_map_is_cgroup_storage(map) &&

View file

@ -309,6 +309,48 @@ static int btf_dumper_struct(const struct btf_dumper *d, __u32 type_id,
return ret; return ret;
} }
static int btf_dumper_var(const struct btf_dumper *d, __u32 type_id,
__u8 bit_offset, const void *data)
{
const struct btf_type *t = btf__type_by_id(d->btf, type_id);
int ret;
jsonw_start_object(d->jw);
jsonw_name(d->jw, btf__name_by_offset(d->btf, t->name_off));
ret = btf_dumper_do_type(d, t->type, bit_offset, data);
jsonw_end_object(d->jw);
return ret;
}
static int btf_dumper_datasec(const struct btf_dumper *d, __u32 type_id,
const void *data)
{
struct btf_var_secinfo *vsi;
const struct btf_type *t;
int ret = 0, i, vlen;
t = btf__type_by_id(d->btf, type_id);
if (!t)
return -EINVAL;
vlen = BTF_INFO_VLEN(t->info);
vsi = (struct btf_var_secinfo *)(t + 1);
jsonw_start_object(d->jw);
jsonw_name(d->jw, btf__name_by_offset(d->btf, t->name_off));
jsonw_start_array(d->jw);
for (i = 0; i < vlen; i++) {
ret = btf_dumper_do_type(d, vsi[i].type, 0, data + vsi[i].offset);
if (ret)
break;
}
jsonw_end_array(d->jw);
jsonw_end_object(d->jw);
return ret;
}
static int btf_dumper_do_type(const struct btf_dumper *d, __u32 type_id, static int btf_dumper_do_type(const struct btf_dumper *d, __u32 type_id,
__u8 bit_offset, const void *data) __u8 bit_offset, const void *data)
{ {
@ -341,6 +383,10 @@ static int btf_dumper_do_type(const struct btf_dumper *d, __u32 type_id,
case BTF_KIND_CONST: case BTF_KIND_CONST:
case BTF_KIND_RESTRICT: case BTF_KIND_RESTRICT:
return btf_dumper_modifier(d, type_id, bit_offset, data); return btf_dumper_modifier(d, type_id, bit_offset, data);
case BTF_KIND_VAR:
return btf_dumper_var(d, type_id, bit_offset, data);
case BTF_KIND_DATASEC:
return btf_dumper_datasec(d, type_id, data);
default: default:
jsonw_printf(d->jw, "(unsupported-kind"); jsonw_printf(d->jw, "(unsupported-kind");
return -EINVAL; return -EINVAL;
@ -377,6 +423,7 @@ static int __btf_dumper_type_only(const struct btf *btf, __u32 type_id,
{ {
const struct btf_type *proto_type; const struct btf_type *proto_type;
const struct btf_array *array; const struct btf_array *array;
const struct btf_var *var;
const struct btf_type *t; const struct btf_type *t;
if (!type_id) { if (!type_id) {
@ -440,6 +487,18 @@ static int __btf_dumper_type_only(const struct btf *btf, __u32 type_id,
if (pos == -1) if (pos == -1)
return -1; return -1;
break; break;
case BTF_KIND_VAR:
var = (struct btf_var *)(t + 1);
if (var->linkage == BTF_VAR_STATIC)
BTF_PRINT_ARG("static ");
BTF_PRINT_TYPE(t->type);
BTF_PRINT_ARG(" %s",
btf__name_by_offset(btf, t->name_off));
break;
case BTF_KIND_DATASEC:
BTF_PRINT_ARG("section (\"%s\") ",
btf__name_by_offset(btf, t->name_off));
break;
case BTF_KIND_UNKN: case BTF_KIND_UNKN:
default: default:
return -1; return -1;
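For example (illustrative names), a BTF_KIND_VAR with BTF_VAR_STATIC linkage over an int named cnt is rendered by this path roughly as

static int cnt

and a DATASEC named .bss as

section (".bss")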

View file

@ -153,11 +153,13 @@ static int do_dump_btf(const struct btf_dumper *d,
/* start of key-value pair */ /* start of key-value pair */
jsonw_start_object(d->jw); jsonw_start_object(d->jw);
jsonw_name(d->jw, "key"); if (map_info->btf_key_type_id) {
jsonw_name(d->jw, "key");
ret = btf_dumper_type(d, map_info->btf_key_type_id, key); ret = btf_dumper_type(d, map_info->btf_key_type_id, key);
if (ret) if (ret)
goto err_end_obj; goto err_end_obj;
}
if (!map_is_per_cpu(map_info->type)) { if (!map_is_per_cpu(map_info->type)) {
jsonw_name(d->jw, "value"); jsonw_name(d->jw, "value");

View file

@ -195,6 +195,9 @@ static const char *print_imm(void *private_data,
if (insn->src_reg == BPF_PSEUDO_MAP_FD) if (insn->src_reg == BPF_PSEUDO_MAP_FD)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff), snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
"map[id:%u]", insn->imm); "map[id:%u]", insn->imm);
else if (insn->src_reg == BPF_PSEUDO_MAP_VALUE)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
"map[id:%u][0]+%u", insn->imm, (insn + 1)->imm);
else else
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff), snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
"0x%llx", (unsigned long long)full_imm); "0x%llx", (unsigned long long)full_imm);

View file

@ -278,10 +278,29 @@
.off = 0, \ .off = 0, \
.imm = ((__u64) (IMM)) >> 32 }) .imm = ((__u64) (IMM)) >> 32 })
#define BPF_LD_IMM64_RAW_FULL(DST, SRC, OFF1, OFF2, IMM1, IMM2) \
((struct bpf_insn) { \
.code = BPF_LD | BPF_DW | BPF_IMM, \
.dst_reg = DST, \
.src_reg = SRC, \
.off = OFF1, \
.imm = IMM1 }), \
((struct bpf_insn) { \
.code = 0, /* zero is reserved opcode */ \
.dst_reg = 0, \
.src_reg = 0, \
.off = OFF2, \
.imm = IMM2 })
/* pseudo BPF_LD_IMM64 insn used to refer to process-local map_fd */ /* pseudo BPF_LD_IMM64 insn used to refer to process-local map_fd */
#define BPF_LD_MAP_FD(DST, MAP_FD) \ #define BPF_LD_MAP_FD(DST, MAP_FD) \
BPF_LD_IMM64_RAW(DST, BPF_PSEUDO_MAP_FD, MAP_FD) BPF_LD_IMM64_RAW_FULL(DST, BPF_PSEUDO_MAP_FD, 0, 0, \
MAP_FD, 0)
#define BPF_LD_MAP_VALUE(DST, MAP_FD, VALUE_OFF) \
BPF_LD_IMM64_RAW_FULL(DST, BPF_PSEUDO_MAP_VALUE, 0, 0, \
MAP_FD, VALUE_OFF)
/* Relative call */ /* Relative call */
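A minimal usage sketch of the new macro (not part of this diff; map_fd is an assumed, valid array map file descriptor): insn[0].imm carries the fd, insn[1].imm the offset into the value, and the verifier later rewrites the pair into the value address and types the destination register as PTR_TO_MAP_VALUE.

struct bpf_insn insns[] = {
	BPF_LD_MAP_VALUE(BPF_REG_1, map_fd, 16),	/* r1 = address of map value + 16 */
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
};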

View file

@ -105,6 +105,7 @@ enum bpf_cmd {
BPF_BTF_GET_FD_BY_ID, BPF_BTF_GET_FD_BY_ID,
BPF_TASK_FD_QUERY, BPF_TASK_FD_QUERY,
BPF_MAP_LOOKUP_AND_DELETE_ELEM, BPF_MAP_LOOKUP_AND_DELETE_ELEM,
BPF_MAP_FREEZE,
}; };
enum bpf_map_type { enum bpf_map_type {
@ -255,8 +256,19 @@ enum bpf_attach_type {
*/ */
#define BPF_F_ANY_ALIGNMENT (1U << 1) #define BPF_F_ANY_ALIGNMENT (1U << 1)
/* when bpf_ldimm64->src_reg == BPF_PSEUDO_MAP_FD, bpf_ldimm64->imm == fd */ /* When BPF ldimm64's insn[0].src_reg != 0 then this can have
* two extensions:
*
* insn[0].src_reg: BPF_PSEUDO_MAP_FD BPF_PSEUDO_MAP_VALUE
* insn[0].imm: map fd map fd
* insn[1].imm: 0 offset into value
* insn[0].off: 0 0
* insn[1].off: 0 0
* ldimm64 rewrite: address of map address of map[0]+offset
* verifier type: CONST_PTR_TO_MAP PTR_TO_MAP_VALUE
*/
#define BPF_PSEUDO_MAP_FD 1 #define BPF_PSEUDO_MAP_FD 1
#define BPF_PSEUDO_MAP_VALUE 2
/* when bpf_call->src_reg == BPF_PSEUDO_CALL, bpf_call->imm == pc-relative /* when bpf_call->src_reg == BPF_PSEUDO_CALL, bpf_call->imm == pc-relative
* offset to another bpf function * offset to another bpf function
@ -283,7 +295,7 @@ enum bpf_attach_type {
#define BPF_OBJ_NAME_LEN 16U #define BPF_OBJ_NAME_LEN 16U
/* Flags for accessing BPF object */ /* Flags for accessing BPF object from syscall side. */
#define BPF_F_RDONLY (1U << 3) #define BPF_F_RDONLY (1U << 3)
#define BPF_F_WRONLY (1U << 4) #define BPF_F_WRONLY (1U << 4)
@ -293,6 +305,10 @@ enum bpf_attach_type {
/* Zero-initialize hash function seed. This should only be used for testing. */ /* Zero-initialize hash function seed. This should only be used for testing. */
#define BPF_F_ZERO_SEED (1U << 6) #define BPF_F_ZERO_SEED (1U << 6)
/* Flags for accessing BPF object from program side. */
#define BPF_F_RDONLY_PROG (1U << 7)
#define BPF_F_WRONLY_PROG (1U << 8)
/* flags for BPF_PROG_QUERY */ /* flags for BPF_PROG_QUERY */
#define BPF_F_QUERY_EFFECTIVE (1U << 0) #define BPF_F_QUERY_EFFECTIVE (1U << 0)
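Taken together, a hedged sketch of the intended lifecycle of a read-only data map (illustrative sizes; sys_bpf() stands in for a raw bpf(2) syscall wrapper): create the array with BPF_F_RDONLY_PROG so programs cannot write it, populate it once from the syscall side, then seal it with BPF_MAP_FREEZE.

union bpf_attr attr = {
	.map_type    = BPF_MAP_TYPE_ARRAY,
	.key_size    = sizeof(__u32),
	.value_size  = 64,
	.max_entries = 1,
	.map_flags   = BPF_F_RDONLY_PROG,
};
int fd = sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr));
/* ... populate the single element via BPF_MAP_UPDATE_ELEM ... */
memset(&attr, 0, sizeof(attr));
attr.map_fd = fd;
sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr));	/* no further syscall-side writes */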

View file

@ -39,11 +39,11 @@ struct btf_type {
* struct, union and fwd * struct, union and fwd
*/ */
__u32 info; __u32 info;
/* "size" is used by INT, ENUM, STRUCT and UNION. /* "size" is used by INT, ENUM, STRUCT, UNION and DATASEC.
* "size" tells the size of the type it is describing. * "size" tells the size of the type it is describing.
* *
* "type" is used by PTR, TYPEDEF, VOLATILE, CONST, RESTRICT, * "type" is used by PTR, TYPEDEF, VOLATILE, CONST, RESTRICT,
* FUNC and FUNC_PROTO. * FUNC, FUNC_PROTO and VAR.
* "type" is a type_id referring to another type. * "type" is a type_id referring to another type.
*/ */
union { union {
@ -70,8 +70,10 @@ struct btf_type {
#define BTF_KIND_RESTRICT 11 /* Restrict */ #define BTF_KIND_RESTRICT 11 /* Restrict */
#define BTF_KIND_FUNC 12 /* Function */ #define BTF_KIND_FUNC 12 /* Function */
#define BTF_KIND_FUNC_PROTO 13 /* Function Proto */ #define BTF_KIND_FUNC_PROTO 13 /* Function Proto */
#define BTF_KIND_MAX 13 #define BTF_KIND_VAR 14 /* Variable */
#define NR_BTF_KINDS 14 #define BTF_KIND_DATASEC 15 /* Section */
#define BTF_KIND_MAX BTF_KIND_DATASEC
#define NR_BTF_KINDS (BTF_KIND_MAX + 1)
/* For some specific BTF_KIND, "struct btf_type" is immediately /* For some specific BTF_KIND, "struct btf_type" is immediately
* followed by extra data. * followed by extra data.
@ -138,4 +140,26 @@ struct btf_param {
__u32 type; __u32 type;
}; };
enum {
BTF_VAR_STATIC = 0,
BTF_VAR_GLOBAL_ALLOCATED,
};
/* BTF_KIND_VAR is followed by a single "struct btf_var" to describe
* additional information related to the variable such as its linkage.
*/
struct btf_var {
__u32 linkage;
};
/* BTF_KIND_DATASEC is followed by multiple "struct btf_var_secinfo"
* to describe all BTF_KIND_VAR types it contains along with its
* in-section offset as well as size.
*/
struct btf_var_secinfo {
__u32 type;
__u32 offset;
__u32 size;
};
#endif /* _UAPI__LINUX_BTF_H__ */ #endif /* _UAPI__LINUX_BTF_H__ */
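As a rough illustration of how these two kinds compose (assumed compiler behaviour, matching the libbpf and selftest changes later in this series): for a BPF C object containing

static int cnt;			/* placed in .bss    */
static const int max_val = 42;	/* placed in .rodata */

the compiler emits one BTF_KIND_VAR per variable (linkage BTF_VAR_STATIC, type pointing at int) and one BTF_KIND_DATASEC per section (".bss", ".rodata") whose btf_var_secinfo entries reference those VARs; the loader (btf__finalize_data() later in this series) then fixes up the section size, and variable offsets where needed, from the ELF file.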

View file

@ -3,7 +3,7 @@
BPF_VERSION = 0 BPF_VERSION = 0
BPF_PATCHLEVEL = 0 BPF_PATCHLEVEL = 0
BPF_EXTRAVERSION = 2 BPF_EXTRAVERSION = 3
MAKEFLAGS += --no-print-directory MAKEFLAGS += --no-print-directory

View file

@ -429,6 +429,16 @@ int bpf_map_get_next_key(int fd, const void *key, void *next_key)
return sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr)); return sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr));
} }
int bpf_map_freeze(int fd)
{
union bpf_attr attr;
memset(&attr, 0, sizeof(attr));
attr.map_fd = fd;
return sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr));
}
int bpf_obj_pin(int fd, const char *pathname) int bpf_obj_pin(int fd, const char *pathname)
{ {
union bpf_attr attr; union bpf_attr attr;

View file

@ -117,6 +117,7 @@ LIBBPF_API int bpf_map_lookup_and_delete_elem(int fd, const void *key,
void *value); void *value);
LIBBPF_API int bpf_map_delete_elem(int fd, const void *key); LIBBPF_API int bpf_map_delete_elem(int fd, const void *key);
LIBBPF_API int bpf_map_get_next_key(int fd, const void *key, void *next_key); LIBBPF_API int bpf_map_get_next_key(int fd, const void *key, void *next_key);
LIBBPF_API int bpf_map_freeze(int fd);
LIBBPF_API int bpf_obj_pin(int fd, const char *pathname); LIBBPF_API int bpf_obj_pin(int fd, const char *pathname);
LIBBPF_API int bpf_obj_get(const char *pathname); LIBBPF_API int bpf_obj_get(const char *pathname);
LIBBPF_API int bpf_prog_attach(int prog_fd, int attachable_fd, LIBBPF_API int bpf_prog_attach(int prog_fd, int attachable_fd,

View file

@ -24,6 +24,8 @@
((k) == BTF_KIND_CONST) || \ ((k) == BTF_KIND_CONST) || \
((k) == BTF_KIND_RESTRICT)) ((k) == BTF_KIND_RESTRICT))
#define IS_VAR(k) ((k) == BTF_KIND_VAR)
static struct btf_type btf_void; static struct btf_type btf_void;
struct btf { struct btf {
@ -212,6 +214,10 @@ static int btf_type_size(struct btf_type *t)
return base_size + vlen * sizeof(struct btf_member); return base_size + vlen * sizeof(struct btf_member);
case BTF_KIND_FUNC_PROTO: case BTF_KIND_FUNC_PROTO:
return base_size + vlen * sizeof(struct btf_param); return base_size + vlen * sizeof(struct btf_param);
case BTF_KIND_VAR:
return base_size + sizeof(struct btf_var);
case BTF_KIND_DATASEC:
return base_size + vlen * sizeof(struct btf_var_secinfo);
default: default:
pr_debug("Unsupported BTF_KIND:%u\n", BTF_INFO_KIND(t->info)); pr_debug("Unsupported BTF_KIND:%u\n", BTF_INFO_KIND(t->info));
return -EINVAL; return -EINVAL;
@ -283,6 +289,7 @@ __s64 btf__resolve_size(const struct btf *btf, __u32 type_id)
case BTF_KIND_STRUCT: case BTF_KIND_STRUCT:
case BTF_KIND_UNION: case BTF_KIND_UNION:
case BTF_KIND_ENUM: case BTF_KIND_ENUM:
case BTF_KIND_DATASEC:
size = t->size; size = t->size;
goto done; goto done;
case BTF_KIND_PTR: case BTF_KIND_PTR:
@ -292,6 +299,7 @@ __s64 btf__resolve_size(const struct btf *btf, __u32 type_id)
case BTF_KIND_VOLATILE: case BTF_KIND_VOLATILE:
case BTF_KIND_CONST: case BTF_KIND_CONST:
case BTF_KIND_RESTRICT: case BTF_KIND_RESTRICT:
case BTF_KIND_VAR:
type_id = t->type; type_id = t->type;
break; break;
case BTF_KIND_ARRAY: case BTF_KIND_ARRAY:
@ -326,7 +334,8 @@ int btf__resolve_type(const struct btf *btf, __u32 type_id)
t = btf__type_by_id(btf, type_id); t = btf__type_by_id(btf, type_id);
while (depth < MAX_RESOLVE_DEPTH && while (depth < MAX_RESOLVE_DEPTH &&
!btf_type_is_void_or_null(t) && !btf_type_is_void_or_null(t) &&
IS_MODIFIER(BTF_INFO_KIND(t->info))) { (IS_MODIFIER(BTF_INFO_KIND(t->info)) ||
IS_VAR(BTF_INFO_KIND(t->info)))) {
type_id = t->type; type_id = t->type;
t = btf__type_by_id(btf, type_id); t = btf__type_by_id(btf, type_id);
depth++; depth++;
@ -408,6 +417,92 @@ done:
return btf; return btf;
} }
static int compare_vsi_off(const void *_a, const void *_b)
{
const struct btf_var_secinfo *a = _a;
const struct btf_var_secinfo *b = _b;
return a->offset - b->offset;
}
static int btf_fixup_datasec(struct bpf_object *obj, struct btf *btf,
struct btf_type *t)
{
__u32 size = 0, off = 0, i, vars = BTF_INFO_VLEN(t->info);
const char *name = btf__name_by_offset(btf, t->name_off);
const struct btf_type *t_var;
struct btf_var_secinfo *vsi;
struct btf_var *var;
int ret;
if (!name) {
pr_debug("No name found in string section for DATASEC kind.\n");
return -ENOENT;
}
ret = bpf_object__section_size(obj, name, &size);
if (ret || !size || (t->size && t->size != size)) {
pr_debug("Invalid size for section %s: %u bytes\n", name, size);
return -ENOENT;
}
t->size = size;
for (i = 0, vsi = (struct btf_var_secinfo *)(t + 1);
i < vars; i++, vsi++) {
t_var = btf__type_by_id(btf, vsi->type);
var = (struct btf_var *)(t_var + 1);
if (BTF_INFO_KIND(t_var->info) != BTF_KIND_VAR) {
pr_debug("Non-VAR type seen in section %s\n", name);
return -EINVAL;
}
if (var->linkage == BTF_VAR_STATIC)
continue;
name = btf__name_by_offset(btf, t_var->name_off);
if (!name) {
pr_debug("No name found in string section for VAR kind\n");
return -ENOENT;
}
ret = bpf_object__variable_offset(obj, name, &off);
if (ret) {
pr_debug("No offset found in symbol table for VAR %s\n", name);
return -ENOENT;
}
vsi->offset = off;
}
qsort(t + 1, vars, sizeof(*vsi), compare_vsi_off);
return 0;
}
int btf__finalize_data(struct bpf_object *obj, struct btf *btf)
{
int err = 0;
__u32 i;
for (i = 1; i <= btf->nr_types; i++) {
struct btf_type *t = btf->types[i];
/* Loader needs to fix up some of the things compiler
* couldn't get its hands on while emitting BTF. This
* is section size and global variable offset. We use
* the info from the ELF itself for this purpose.
*/
if (BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC) {
err = btf_fixup_datasec(obj, btf, t);
if (err)
break;
}
}
return err;
}
int btf__load(struct btf *btf) int btf__load(struct btf *btf)
{ {
__u32 log_buf_size = BPF_LOG_BUF_SIZE; __u32 log_buf_size = BPF_LOG_BUF_SIZE;

View file

@ -21,6 +21,8 @@ struct btf;
struct btf_ext; struct btf_ext;
struct btf_type; struct btf_type;
struct bpf_object;
/* /*
* The .BTF.ext ELF section layout defined as * The .BTF.ext ELF section layout defined as
* struct btf_ext_header * struct btf_ext_header
@ -57,6 +59,7 @@ struct btf_ext_header {
LIBBPF_API void btf__free(struct btf *btf); LIBBPF_API void btf__free(struct btf *btf);
LIBBPF_API struct btf *btf__new(__u8 *data, __u32 size); LIBBPF_API struct btf *btf__new(__u8 *data, __u32 size);
LIBBPF_API int btf__finalize_data(struct bpf_object *obj, struct btf *btf);
LIBBPF_API int btf__load(struct btf *btf); LIBBPF_API int btf__load(struct btf *btf);
LIBBPF_API __s32 btf__find_by_name(const struct btf *btf, LIBBPF_API __s32 btf__find_by_name(const struct btf *btf,
const char *type_name); const char *type_name);
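A sketch of the intended call order, mirroring bpf_object__elf_collect() later in this series (btf_data and obj are assumed to be the ELF .BTF section data and the containing bpf_object):

struct btf *btf = btf__new(btf_data->d_buf, btf_data->d_size);

if (!IS_ERR(btf)) {
	err = btf__finalize_data(obj, btf);	/* patch DATASEC size/offsets from the ELF */
	if (!err)
		err = btf__load(btf);
}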

View file

@ -7,6 +7,7 @@
* Copyright (C) 2015 Wang Nan <wangnan0@huawei.com> * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
* Copyright (C) 2015 Huawei Inc. * Copyright (C) 2015 Huawei Inc.
* Copyright (C) 2017 Nicira, Inc. * Copyright (C) 2017 Nicira, Inc.
* Copyright (C) 2019 Isovalent, Inc.
*/ */
#ifndef _GNU_SOURCE #ifndef _GNU_SOURCE
@ -149,6 +150,7 @@ struct bpf_program {
enum { enum {
RELO_LD64, RELO_LD64,
RELO_CALL, RELO_CALL,
RELO_DATA,
} type; } type;
int insn_idx; int insn_idx;
union { union {
@ -182,6 +184,19 @@ struct bpf_program {
__u32 line_info_cnt; __u32 line_info_cnt;
}; };
enum libbpf_map_type {
LIBBPF_MAP_UNSPEC,
LIBBPF_MAP_DATA,
LIBBPF_MAP_BSS,
LIBBPF_MAP_RODATA,
};
static const char * const libbpf_type_to_btf_name[] = {
[LIBBPF_MAP_DATA] = ".data",
[LIBBPF_MAP_BSS] = ".bss",
[LIBBPF_MAP_RODATA] = ".rodata",
};
struct bpf_map { struct bpf_map {
int fd; int fd;
char *name; char *name;
@ -193,11 +208,18 @@ struct bpf_map {
__u32 btf_value_type_id; __u32 btf_value_type_id;
void *priv; void *priv;
bpf_map_clear_priv_t clear_priv; bpf_map_clear_priv_t clear_priv;
enum libbpf_map_type libbpf_type;
};
struct bpf_secdata {
void *rodata;
void *data;
}; };
static LIST_HEAD(bpf_objects_list); static LIST_HEAD(bpf_objects_list);
struct bpf_object { struct bpf_object {
char name[BPF_OBJ_NAME_LEN];
char license[64]; char license[64];
__u32 kern_version; __u32 kern_version;
@ -205,6 +227,7 @@ struct bpf_object {
size_t nr_programs; size_t nr_programs;
struct bpf_map *maps; struct bpf_map *maps;
size_t nr_maps; size_t nr_maps;
struct bpf_secdata sections;
bool loaded; bool loaded;
bool has_pseudo_calls; bool has_pseudo_calls;
@ -220,6 +243,9 @@ struct bpf_object {
Elf *elf; Elf *elf;
GElf_Ehdr ehdr; GElf_Ehdr ehdr;
Elf_Data *symbols; Elf_Data *symbols;
Elf_Data *data;
Elf_Data *rodata;
Elf_Data *bss;
size_t strtabidx; size_t strtabidx;
struct { struct {
GElf_Shdr shdr; GElf_Shdr shdr;
@ -228,6 +254,9 @@ struct bpf_object {
int nr_reloc; int nr_reloc;
int maps_shndx; int maps_shndx;
int text_shndx; int text_shndx;
int data_shndx;
int rodata_shndx;
int bss_shndx;
} efile; } efile;
/* /*
* All loaded bpf_object is linked in a list, which is * All loaded bpf_object is linked in a list, which is
@ -449,6 +478,7 @@ static struct bpf_object *bpf_object__new(const char *path,
size_t obj_buf_sz) size_t obj_buf_sz)
{ {
struct bpf_object *obj; struct bpf_object *obj;
char *end;
obj = calloc(1, sizeof(struct bpf_object) + strlen(path) + 1); obj = calloc(1, sizeof(struct bpf_object) + strlen(path) + 1);
if (!obj) { if (!obj) {
@ -457,8 +487,14 @@ static struct bpf_object *bpf_object__new(const char *path,
} }
strcpy(obj->path, path); strcpy(obj->path, path);
obj->efile.fd = -1; /* Using basename() GNU version which doesn't modify arg. */
strncpy(obj->name, basename((void *)path),
sizeof(obj->name) - 1);
end = strchr(obj->name, '.');
if (end)
*end = 0;
obj->efile.fd = -1;
/* /*
* Caller of this function should also calls * Caller of this function should also calls
* bpf_object__elf_finish() after data collection to return * bpf_object__elf_finish() after data collection to return
@ -468,6 +504,9 @@ static struct bpf_object *bpf_object__new(const char *path,
obj->efile.obj_buf = obj_buf; obj->efile.obj_buf = obj_buf;
obj->efile.obj_buf_sz = obj_buf_sz; obj->efile.obj_buf_sz = obj_buf_sz;
obj->efile.maps_shndx = -1; obj->efile.maps_shndx = -1;
obj->efile.data_shndx = -1;
obj->efile.rodata_shndx = -1;
obj->efile.bss_shndx = -1;
obj->loaded = false; obj->loaded = false;
@ -486,6 +525,9 @@ static void bpf_object__elf_finish(struct bpf_object *obj)
obj->efile.elf = NULL; obj->efile.elf = NULL;
} }
obj->efile.symbols = NULL; obj->efile.symbols = NULL;
obj->efile.data = NULL;
obj->efile.rodata = NULL;
obj->efile.bss = NULL;
zfree(&obj->efile.reloc); zfree(&obj->efile.reloc);
obj->efile.nr_reloc = 0; obj->efile.nr_reloc = 0;
@ -627,27 +669,182 @@ static bool bpf_map_type__is_map_in_map(enum bpf_map_type type)
return false; return false;
} }
static int bpf_object_search_section_size(const struct bpf_object *obj,
const char *name, size_t *d_size)
{
const GElf_Ehdr *ep = &obj->efile.ehdr;
Elf *elf = obj->efile.elf;
Elf_Scn *scn = NULL;
int idx = 0;
while ((scn = elf_nextscn(elf, scn)) != NULL) {
const char *sec_name;
Elf_Data *data;
GElf_Shdr sh;
idx++;
if (gelf_getshdr(scn, &sh) != &sh) {
pr_warning("failed to get section(%d) header from %s\n",
idx, obj->path);
return -EIO;
}
sec_name = elf_strptr(elf, ep->e_shstrndx, sh.sh_name);
if (!sec_name) {
pr_warning("failed to get section(%d) name from %s\n",
idx, obj->path);
return -EIO;
}
if (strcmp(name, sec_name))
continue;
data = elf_getdata(scn, 0);
if (!data) {
pr_warning("failed to get section(%d) data from %s(%s)\n",
idx, name, obj->path);
return -EIO;
}
*d_size = data->d_size;
return 0;
}
return -ENOENT;
}
int bpf_object__section_size(const struct bpf_object *obj, const char *name,
__u32 *size)
{
int ret = -ENOENT;
size_t d_size;
*size = 0;
if (!name) {
return -EINVAL;
} else if (!strcmp(name, ".data")) {
if (obj->efile.data)
*size = obj->efile.data->d_size;
} else if (!strcmp(name, ".bss")) {
if (obj->efile.bss)
*size = obj->efile.bss->d_size;
} else if (!strcmp(name, ".rodata")) {
if (obj->efile.rodata)
*size = obj->efile.rodata->d_size;
} else {
ret = bpf_object_search_section_size(obj, name, &d_size);
if (!ret)
*size = d_size;
}
return *size ? 0 : ret;
}
int bpf_object__variable_offset(const struct bpf_object *obj, const char *name,
__u32 *off)
{
Elf_Data *symbols = obj->efile.symbols;
const char *sname;
size_t si;
if (!name || !off)
return -EINVAL;
for (si = 0; si < symbols->d_size / sizeof(GElf_Sym); si++) {
GElf_Sym sym;
if (!gelf_getsym(symbols, si, &sym))
continue;
if (GELF_ST_BIND(sym.st_info) != STB_GLOBAL ||
GELF_ST_TYPE(sym.st_info) != STT_OBJECT)
continue;
sname = elf_strptr(obj->efile.elf, obj->efile.strtabidx,
sym.st_name);
if (!sname) {
pr_warning("failed to get sym name string for var %s\n",
name);
return -EIO;
}
if (strcmp(name, sname) == 0) {
*off = sym.st_value;
return 0;
}
}
return -ENOENT;
}
static bool bpf_object__has_maps(const struct bpf_object *obj)
{
return obj->efile.maps_shndx >= 0 ||
obj->efile.data_shndx >= 0 ||
obj->efile.rodata_shndx >= 0 ||
obj->efile.bss_shndx >= 0;
}
static int
bpf_object__init_internal_map(struct bpf_object *obj, struct bpf_map *map,
enum libbpf_map_type type, Elf_Data *data,
void **data_buff)
{
struct bpf_map_def *def = &map->def;
char map_name[BPF_OBJ_NAME_LEN];
map->libbpf_type = type;
map->offset = ~(typeof(map->offset))0;
snprintf(map_name, sizeof(map_name), "%.8s%.7s", obj->name,
libbpf_type_to_btf_name[type]);
map->name = strdup(map_name);
if (!map->name) {
pr_warning("failed to alloc map name\n");
return -ENOMEM;
}
def->type = BPF_MAP_TYPE_ARRAY;
def->key_size = sizeof(int);
def->value_size = data->d_size;
def->max_entries = 1;
def->map_flags = type == LIBBPF_MAP_RODATA ?
BPF_F_RDONLY_PROG : 0;
if (data_buff) {
*data_buff = malloc(data->d_size);
if (!*data_buff) {
zfree(&map->name);
pr_warning("failed to alloc map content buffer\n");
return -ENOMEM;
}
memcpy(*data_buff, data->d_buf, data->d_size);
}
pr_debug("map %ld is \"%s\"\n", map - obj->maps, map->name);
return 0;
}
static int static int
bpf_object__init_maps(struct bpf_object *obj, int flags) bpf_object__init_maps(struct bpf_object *obj, int flags)
{ {
int i, map_idx, map_def_sz, nr_syms, nr_maps = 0, nr_maps_glob = 0;
bool strict = !(flags & MAPS_RELAX_COMPAT); bool strict = !(flags & MAPS_RELAX_COMPAT);
int i, map_idx, map_def_sz, nr_maps = 0;
Elf_Scn *scn;
Elf_Data *data = NULL;
Elf_Data *symbols = obj->efile.symbols; Elf_Data *symbols = obj->efile.symbols;
Elf_Data *data = NULL;
int ret = 0;
if (obj->efile.maps_shndx < 0)
return -EINVAL;
if (!symbols) if (!symbols)
return -EINVAL; return -EINVAL;
nr_syms = symbols->d_size / sizeof(GElf_Sym);
scn = elf_getscn(obj->efile.elf, obj->efile.maps_shndx); if (obj->efile.maps_shndx >= 0) {
if (scn) Elf_Scn *scn = elf_getscn(obj->efile.elf,
data = elf_getdata(scn, NULL); obj->efile.maps_shndx);
if (!scn || !data) {
pr_warning("failed to get Elf_Data from map section %d\n", if (scn)
obj->efile.maps_shndx); data = elf_getdata(scn, NULL);
return -EINVAL; if (!scn || !data) {
pr_warning("failed to get Elf_Data from map section %d\n",
obj->efile.maps_shndx);
return -EINVAL;
}
} }
/* /*
@ -657,7 +854,13 @@ bpf_object__init_maps(struct bpf_object *obj, int flags)
* *
* TODO: Detect array of map and report error. * TODO: Detect array of map and report error.
*/ */
for (i = 0; i < symbols->d_size / sizeof(GElf_Sym); i++) { if (obj->efile.data_shndx >= 0)
nr_maps_glob++;
if (obj->efile.rodata_shndx >= 0)
nr_maps_glob++;
if (obj->efile.bss_shndx >= 0)
nr_maps_glob++;
for (i = 0; data && i < nr_syms; i++) {
GElf_Sym sym; GElf_Sym sym;
if (!gelf_getsym(symbols, i, &sym)) if (!gelf_getsym(symbols, i, &sym))
@ -670,19 +873,21 @@ bpf_object__init_maps(struct bpf_object *obj, int flags)
/* Alloc obj->maps and fill nr_maps. */ /* Alloc obj->maps and fill nr_maps. */
pr_debug("maps in %s: %d maps in %zd bytes\n", obj->path, pr_debug("maps in %s: %d maps in %zd bytes\n", obj->path,
nr_maps, data->d_size); nr_maps, data->d_size);
if (!nr_maps && !nr_maps_glob)
if (!nr_maps)
return 0; return 0;
/* Assume equally sized map definitions */ /* Assume equally sized map definitions */
map_def_sz = data->d_size / nr_maps; if (data) {
if (!data->d_size || (data->d_size % nr_maps) != 0) { map_def_sz = data->d_size / nr_maps;
pr_warning("unable to determine map definition size " if (!data->d_size || (data->d_size % nr_maps) != 0) {
"section %s, %d maps in %zd bytes\n", pr_warning("unable to determine map definition size "
obj->path, nr_maps, data->d_size); "section %s, %d maps in %zd bytes\n",
return -EINVAL; obj->path, nr_maps, data->d_size);
return -EINVAL;
}
} }
nr_maps += nr_maps_glob;
obj->maps = calloc(nr_maps, sizeof(obj->maps[0])); obj->maps = calloc(nr_maps, sizeof(obj->maps[0]));
if (!obj->maps) { if (!obj->maps) {
pr_warning("alloc maps for object failed\n"); pr_warning("alloc maps for object failed\n");
@ -703,7 +908,7 @@ bpf_object__init_maps(struct bpf_object *obj, int flags)
/* /*
* Fill obj->maps using data in "maps" section. * Fill obj->maps using data in "maps" section.
*/ */
for (i = 0, map_idx = 0; i < symbols->d_size / sizeof(GElf_Sym); i++) { for (i = 0, map_idx = 0; data && i < nr_syms; i++) {
GElf_Sym sym; GElf_Sym sym;
const char *map_name; const char *map_name;
struct bpf_map_def *def; struct bpf_map_def *def;
@ -716,6 +921,8 @@ bpf_object__init_maps(struct bpf_object *obj, int flags)
map_name = elf_strptr(obj->efile.elf, map_name = elf_strptr(obj->efile.elf,
obj->efile.strtabidx, obj->efile.strtabidx,
sym.st_name); sym.st_name);
obj->maps[map_idx].libbpf_type = LIBBPF_MAP_UNSPEC;
obj->maps[map_idx].offset = sym.st_value; obj->maps[map_idx].offset = sym.st_value;
if (sym.st_value + map_def_sz > data->d_size) { if (sym.st_value + map_def_sz > data->d_size) {
pr_warning("corrupted maps section in %s: last map \"%s\" too small\n", pr_warning("corrupted maps section in %s: last map \"%s\" too small\n",
@ -764,8 +971,27 @@ bpf_object__init_maps(struct bpf_object *obj, int flags)
map_idx++; map_idx++;
} }
qsort(obj->maps, obj->nr_maps, sizeof(obj->maps[0]), compare_bpf_map); /*
return 0; * Populate rest of obj->maps with libbpf internal maps.
*/
if (obj->efile.data_shndx >= 0)
ret = bpf_object__init_internal_map(obj, &obj->maps[map_idx++],
LIBBPF_MAP_DATA,
obj->efile.data,
&obj->sections.data);
if (!ret && obj->efile.rodata_shndx >= 0)
ret = bpf_object__init_internal_map(obj, &obj->maps[map_idx++],
LIBBPF_MAP_RODATA,
obj->efile.rodata,
&obj->sections.rodata);
if (!ret && obj->efile.bss_shndx >= 0)
ret = bpf_object__init_internal_map(obj, &obj->maps[map_idx++],
LIBBPF_MAP_BSS,
obj->efile.bss, NULL);
if (!ret)
qsort(obj->maps, obj->nr_maps, sizeof(obj->maps[0]),
compare_bpf_map);
return ret;
} }
static bool section_have_execinstr(struct bpf_object *obj, int idx) static bool section_have_execinstr(struct bpf_object *obj, int idx)
@ -791,6 +1017,7 @@ static int bpf_object__elf_collect(struct bpf_object *obj, int flags)
Elf *elf = obj->efile.elf; Elf *elf = obj->efile.elf;
GElf_Ehdr *ep = &obj->efile.ehdr; GElf_Ehdr *ep = &obj->efile.ehdr;
Elf_Data *btf_ext_data = NULL; Elf_Data *btf_ext_data = NULL;
Elf_Data *btf_data = NULL;
Elf_Scn *scn = NULL; Elf_Scn *scn = NULL;
int idx = 0, err = 0; int idx = 0, err = 0;
@ -834,32 +1061,18 @@ static int bpf_object__elf_collect(struct bpf_object *obj, int flags)
(int)sh.sh_link, (unsigned long)sh.sh_flags, (int)sh.sh_link, (unsigned long)sh.sh_flags,
(int)sh.sh_type); (int)sh.sh_type);
if (strcmp(name, "license") == 0) if (strcmp(name, "license") == 0) {
err = bpf_object__init_license(obj, err = bpf_object__init_license(obj,
data->d_buf, data->d_buf,
data->d_size); data->d_size);
else if (strcmp(name, "version") == 0) } else if (strcmp(name, "version") == 0) {
err = bpf_object__init_kversion(obj, err = bpf_object__init_kversion(obj,
data->d_buf, data->d_buf,
data->d_size); data->d_size);
else if (strcmp(name, "maps") == 0) } else if (strcmp(name, "maps") == 0) {
obj->efile.maps_shndx = idx; obj->efile.maps_shndx = idx;
else if (strcmp(name, BTF_ELF_SEC) == 0) { } else if (strcmp(name, BTF_ELF_SEC) == 0) {
obj->btf = btf__new(data->d_buf, data->d_size); btf_data = data;
if (IS_ERR(obj->btf)) {
pr_warning("Error loading ELF section %s: %ld. Ignored and continue.\n",
BTF_ELF_SEC, PTR_ERR(obj->btf));
obj->btf = NULL;
continue;
}
err = btf__load(obj->btf);
if (err) {
pr_warning("Error loading %s into kernel: %d. Ignored and continue.\n",
BTF_ELF_SEC, err);
btf__free(obj->btf);
obj->btf = NULL;
err = 0;
}
} else if (strcmp(name, BTF_EXT_ELF_SEC) == 0) { } else if (strcmp(name, BTF_EXT_ELF_SEC) == 0) {
btf_ext_data = data; btf_ext_data = data;
} else if (sh.sh_type == SHT_SYMTAB) { } else if (sh.sh_type == SHT_SYMTAB) {
@ -871,20 +1084,28 @@ static int bpf_object__elf_collect(struct bpf_object *obj, int flags)
obj->efile.symbols = data; obj->efile.symbols = data;
obj->efile.strtabidx = sh.sh_link; obj->efile.strtabidx = sh.sh_link;
} }
} else if ((sh.sh_type == SHT_PROGBITS) && } else if (sh.sh_type == SHT_PROGBITS && data->d_size > 0) {
(sh.sh_flags & SHF_EXECINSTR) && if (sh.sh_flags & SHF_EXECINSTR) {
(data->d_size > 0)) { if (strcmp(name, ".text") == 0)
if (strcmp(name, ".text") == 0) obj->efile.text_shndx = idx;
obj->efile.text_shndx = idx; err = bpf_object__add_program(obj, data->d_buf,
err = bpf_object__add_program(obj, data->d_buf, data->d_size, name, idx);
data->d_size, name, idx); if (err) {
if (err) { char errmsg[STRERR_BUFSIZE];
char errmsg[STRERR_BUFSIZE]; char *cp = libbpf_strerror_r(-err, errmsg,
char *cp = libbpf_strerror_r(-err, errmsg, sizeof(errmsg));
sizeof(errmsg));
pr_warning("failed to alloc program %s (%s): %s", pr_warning("failed to alloc program %s (%s): %s",
name, obj->path, cp); name, obj->path, cp);
}
} else if (strcmp(name, ".data") == 0) {
obj->efile.data = data;
obj->efile.data_shndx = idx;
} else if (strcmp(name, ".rodata") == 0) {
obj->efile.rodata = data;
obj->efile.rodata_shndx = idx;
} else {
pr_debug("skip section(%d) %s\n", idx, name);
} }
} else if (sh.sh_type == SHT_REL) { } else if (sh.sh_type == SHT_REL) {
void *reloc = obj->efile.reloc; void *reloc = obj->efile.reloc;
@ -912,6 +1133,9 @@ static int bpf_object__elf_collect(struct bpf_object *obj, int flags)
obj->efile.reloc[n].shdr = sh; obj->efile.reloc[n].shdr = sh;
obj->efile.reloc[n].data = data; obj->efile.reloc[n].data = data;
} }
} else if (sh.sh_type == SHT_NOBITS && strcmp(name, ".bss") == 0) {
obj->efile.bss = data;
obj->efile.bss_shndx = idx;
} else { } else {
pr_debug("skip section(%d) %s\n", idx, name); pr_debug("skip section(%d) %s\n", idx, name);
} }
@ -923,6 +1147,25 @@ static int bpf_object__elf_collect(struct bpf_object *obj, int flags)
pr_warning("Corrupted ELF file: index of strtab invalid\n"); pr_warning("Corrupted ELF file: index of strtab invalid\n");
return LIBBPF_ERRNO__FORMAT; return LIBBPF_ERRNO__FORMAT;
} }
if (btf_data) {
obj->btf = btf__new(btf_data->d_buf, btf_data->d_size);
if (IS_ERR(obj->btf)) {
pr_warning("Error loading ELF section %s: %ld. Ignored and continue.\n",
BTF_ELF_SEC, PTR_ERR(obj->btf));
obj->btf = NULL;
} else {
err = btf__finalize_data(obj, obj->btf);
if (!err)
err = btf__load(obj->btf);
if (err) {
pr_warning("Error finalizing and loading %s into kernel: %d. Ignored and continue.\n",
BTF_ELF_SEC, err);
btf__free(obj->btf);
obj->btf = NULL;
err = 0;
}
}
}
if (btf_ext_data) { if (btf_ext_data) {
if (!obj->btf) { if (!obj->btf) {
pr_debug("Ignore ELF section %s because its depending ELF section %s is not found.\n", pr_debug("Ignore ELF section %s because its depending ELF section %s is not found.\n",
@ -938,7 +1181,7 @@ static int bpf_object__elf_collect(struct bpf_object *obj, int flags)
} }
} }
} }
if (obj->efile.maps_shndx >= 0) { if (bpf_object__has_maps(obj)) {
err = bpf_object__init_maps(obj, flags); err = bpf_object__init_maps(obj, flags);
if (err) if (err)
goto out; goto out;
@ -974,13 +1217,46 @@ bpf_object__find_program_by_title(struct bpf_object *obj, const char *title)
return NULL; return NULL;
} }
static bool bpf_object__shndx_is_data(const struct bpf_object *obj,
int shndx)
{
return shndx == obj->efile.data_shndx ||
shndx == obj->efile.bss_shndx ||
shndx == obj->efile.rodata_shndx;
}
static bool bpf_object__shndx_is_maps(const struct bpf_object *obj,
int shndx)
{
return shndx == obj->efile.maps_shndx;
}
static bool bpf_object__relo_in_known_section(const struct bpf_object *obj,
int shndx)
{
return shndx == obj->efile.text_shndx ||
bpf_object__shndx_is_maps(obj, shndx) ||
bpf_object__shndx_is_data(obj, shndx);
}
static enum libbpf_map_type
bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
{
if (shndx == obj->efile.data_shndx)
return LIBBPF_MAP_DATA;
else if (shndx == obj->efile.bss_shndx)
return LIBBPF_MAP_BSS;
else if (shndx == obj->efile.rodata_shndx)
return LIBBPF_MAP_RODATA;
else
return LIBBPF_MAP_UNSPEC;
}
static int static int
bpf_program__collect_reloc(struct bpf_program *prog, GElf_Shdr *shdr, bpf_program__collect_reloc(struct bpf_program *prog, GElf_Shdr *shdr,
Elf_Data *data, struct bpf_object *obj) Elf_Data *data, struct bpf_object *obj)
{ {
Elf_Data *symbols = obj->efile.symbols; Elf_Data *symbols = obj->efile.symbols;
int text_shndx = obj->efile.text_shndx;
int maps_shndx = obj->efile.maps_shndx;
struct bpf_map *maps = obj->maps; struct bpf_map *maps = obj->maps;
size_t nr_maps = obj->nr_maps; size_t nr_maps = obj->nr_maps;
int i, nrels; int i, nrels;
@ -1000,7 +1276,10 @@ bpf_program__collect_reloc(struct bpf_program *prog, GElf_Shdr *shdr,
GElf_Sym sym; GElf_Sym sym;
GElf_Rel rel; GElf_Rel rel;
unsigned int insn_idx; unsigned int insn_idx;
unsigned int shdr_idx;
struct bpf_insn *insns = prog->insns; struct bpf_insn *insns = prog->insns;
enum libbpf_map_type type;
const char *name;
size_t map_idx; size_t map_idx;
if (!gelf_getrel(data, i, &rel)) { if (!gelf_getrel(data, i, &rel)) {
@ -1015,13 +1294,18 @@ bpf_program__collect_reloc(struct bpf_program *prog, GElf_Shdr *shdr,
GELF_R_SYM(rel.r_info)); GELF_R_SYM(rel.r_info));
return -LIBBPF_ERRNO__FORMAT; return -LIBBPF_ERRNO__FORMAT;
} }
pr_debug("relo for %lld value %lld name %d\n",
(long long) (rel.r_info >> 32),
(long long) sym.st_value, sym.st_name);
if (sym.st_shndx != maps_shndx && sym.st_shndx != text_shndx) { name = elf_strptr(obj->efile.elf, obj->efile.strtabidx,
pr_warning("Program '%s' contains non-map related relo data pointing to section %u\n", sym.st_name) ? : "<?>";
prog->section_name, sym.st_shndx);
pr_debug("relo for %lld value %lld name %d (\'%s\')\n",
(long long) (rel.r_info >> 32),
(long long) sym.st_value, sym.st_name, name);
shdr_idx = sym.st_shndx;
if (!bpf_object__relo_in_known_section(obj, shdr_idx)) {
pr_warning("Program '%s' contains unrecognized relo data pointing to section %u\n",
prog->section_name, shdr_idx);
return -LIBBPF_ERRNO__RELOC; return -LIBBPF_ERRNO__RELOC;
} }
@ -1046,24 +1330,39 @@ bpf_program__collect_reloc(struct bpf_program *prog, GElf_Shdr *shdr,
return -LIBBPF_ERRNO__RELOC; return -LIBBPF_ERRNO__RELOC;
} }
/* TODO: 'maps' is sorted. We can use bsearch to make it faster. */ if (bpf_object__shndx_is_maps(obj, shdr_idx) ||
for (map_idx = 0; map_idx < nr_maps; map_idx++) { bpf_object__shndx_is_data(obj, shdr_idx)) {
if (maps[map_idx].offset == sym.st_value) { type = bpf_object__section_to_libbpf_map_type(obj, shdr_idx);
pr_debug("relocation: find map %zd (%s) for insn %u\n", if (type != LIBBPF_MAP_UNSPEC &&
map_idx, maps[map_idx].name, insn_idx); GELF_ST_BIND(sym.st_info) == STB_GLOBAL) {
break; pr_warning("bpf: relocation: not yet supported relo for non-static global \'%s\' variable found in insns[%d].code 0x%x\n",
name, insn_idx, insns[insn_idx].code);
return -LIBBPF_ERRNO__RELOC;
} }
}
if (map_idx >= nr_maps) { for (map_idx = 0; map_idx < nr_maps; map_idx++) {
pr_warning("bpf relocation: map_idx %d large than %d\n", if (maps[map_idx].libbpf_type != type)
(int)map_idx, (int)nr_maps - 1); continue;
return -LIBBPF_ERRNO__RELOC; if (type != LIBBPF_MAP_UNSPEC ||
} (type == LIBBPF_MAP_UNSPEC &&
maps[map_idx].offset == sym.st_value)) {
pr_debug("relocation: find map %zd (%s) for insn %u\n",
map_idx, maps[map_idx].name, insn_idx);
break;
}
}
prog->reloc_desc[i].type = RELO_LD64; if (map_idx >= nr_maps) {
prog->reloc_desc[i].insn_idx = insn_idx; pr_warning("bpf relocation: map_idx %d large than %d\n",
prog->reloc_desc[i].map_idx = map_idx; (int)map_idx, (int)nr_maps - 1);
return -LIBBPF_ERRNO__RELOC;
}
prog->reloc_desc[i].type = type != LIBBPF_MAP_UNSPEC ?
RELO_DATA : RELO_LD64;
prog->reloc_desc[i].insn_idx = insn_idx;
prog->reloc_desc[i].map_idx = map_idx;
}
} }
return 0; return 0;
} }
@ -1071,18 +1370,27 @@ bpf_program__collect_reloc(struct bpf_program *prog, GElf_Shdr *shdr,
static int bpf_map_find_btf_info(struct bpf_map *map, const struct btf *btf) static int bpf_map_find_btf_info(struct bpf_map *map, const struct btf *btf)
{ {
struct bpf_map_def *def = &map->def; struct bpf_map_def *def = &map->def;
__u32 key_type_id, value_type_id; __u32 key_type_id = 0, value_type_id = 0;
int ret; int ret;
ret = btf__get_map_kv_tids(btf, map->name, def->key_size, if (!bpf_map__is_internal(map)) {
def->value_size, &key_type_id, ret = btf__get_map_kv_tids(btf, map->name, def->key_size,
&value_type_id); def->value_size, &key_type_id,
if (ret) &value_type_id);
} else {
/*
* LLVM annotates global data differently in BTF, that is,
* only as '.data', '.bss' or '.rodata'.
*/
ret = btf__find_by_name(btf,
libbpf_type_to_btf_name[map->libbpf_type]);
}
if (ret < 0)
return ret; return ret;
map->btf_key_type_id = key_type_id; map->btf_key_type_id = key_type_id;
map->btf_value_type_id = value_type_id; map->btf_value_type_id = bpf_map__is_internal(map) ?
ret : value_type_id;
return 0; return 0;
} }
@ -1193,6 +1501,34 @@ bpf_object__probe_caps(struct bpf_object *obj)
return bpf_object__probe_name(obj); return bpf_object__probe_name(obj);
} }
static int
bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
{
char *cp, errmsg[STRERR_BUFSIZE];
int err, zero = 0;
__u8 *data;
/* Nothing to do here since kernel already zero-initializes .bss map. */
if (map->libbpf_type == LIBBPF_MAP_BSS)
return 0;
data = map->libbpf_type == LIBBPF_MAP_DATA ?
obj->sections.data : obj->sections.rodata;
err = bpf_map_update_elem(map->fd, &zero, data, 0);
/* Freeze .rodata map as read-only from syscall side. */
if (!err && map->libbpf_type == LIBBPF_MAP_RODATA) {
err = bpf_map_freeze(map->fd);
if (err) {
cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
pr_warning("Error freezing map(%s) as read-only: %s\n",
map->name, cp);
err = 0;
}
}
return err;
}
static int static int
bpf_object__create_maps(struct bpf_object *obj) bpf_object__create_maps(struct bpf_object *obj)
{ {
@ -1250,6 +1586,7 @@ bpf_object__create_maps(struct bpf_object *obj)
size_t j; size_t j;
err = *pfd; err = *pfd;
err_out:
cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg)); cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
pr_warning("failed to create map (name: '%s'): %s\n", pr_warning("failed to create map (name: '%s'): %s\n",
map->name, cp); map->name, cp);
@ -1257,6 +1594,15 @@ bpf_object__create_maps(struct bpf_object *obj)
zclose(obj->maps[j].fd); zclose(obj->maps[j].fd);
return err; return err;
} }
if (bpf_map__is_internal(map)) {
err = bpf_object__populate_internal_map(obj, map);
if (err < 0) {
zclose(*pfd);
goto err_out;
}
}
pr_debug("create map %s: fd=%d\n", map->name, *pfd); pr_debug("create map %s: fd=%d\n", map->name, *pfd);
} }
@ -1411,21 +1757,29 @@ bpf_program__relocate(struct bpf_program *prog, struct bpf_object *obj)
return 0; return 0;
for (i = 0; i < prog->nr_reloc; i++) { for (i = 0; i < prog->nr_reloc; i++) {
if (prog->reloc_desc[i].type == RELO_LD64) { if (prog->reloc_desc[i].type == RELO_LD64 ||
prog->reloc_desc[i].type == RELO_DATA) {
bool relo_data = prog->reloc_desc[i].type == RELO_DATA;
struct bpf_insn *insns = prog->insns; struct bpf_insn *insns = prog->insns;
int insn_idx, map_idx; int insn_idx, map_idx;
insn_idx = prog->reloc_desc[i].insn_idx; insn_idx = prog->reloc_desc[i].insn_idx;
map_idx = prog->reloc_desc[i].map_idx; map_idx = prog->reloc_desc[i].map_idx;
if (insn_idx >= (int)prog->insns_cnt) { if (insn_idx + 1 >= (int)prog->insns_cnt) {
pr_warning("relocation out of range: '%s'\n", pr_warning("relocation out of range: '%s'\n",
prog->section_name); prog->section_name);
return -LIBBPF_ERRNO__RELOC; return -LIBBPF_ERRNO__RELOC;
} }
insns[insn_idx].src_reg = BPF_PSEUDO_MAP_FD;
if (!relo_data) {
insns[insn_idx].src_reg = BPF_PSEUDO_MAP_FD;
} else {
insns[insn_idx].src_reg = BPF_PSEUDO_MAP_VALUE;
insns[insn_idx + 1].imm = insns[insn_idx].imm;
}
insns[insn_idx].imm = obj->maps[map_idx].fd; insns[insn_idx].imm = obj->maps[map_idx].fd;
} else { } else if (prog->reloc_desc[i].type == RELO_CALL) {
err = bpf_program__reloc_text(prog, obj, err = bpf_program__reloc_text(prog, obj,
&prog->reloc_desc[i]); &prog->reloc_desc[i]);
if (err) if (err)
@ -2319,6 +2673,9 @@ void bpf_object__close(struct bpf_object *obj)
obj->maps[i].priv = NULL; obj->maps[i].priv = NULL;
obj->maps[i].clear_priv = NULL; obj->maps[i].clear_priv = NULL;
} }
zfree(&obj->sections.rodata);
zfree(&obj->sections.data);
zfree(&obj->maps); zfree(&obj->maps);
obj->nr_maps = 0; obj->nr_maps = 0;
@ -2796,6 +3153,11 @@ bool bpf_map__is_offload_neutral(struct bpf_map *map)
return map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY; return map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY;
} }
bool bpf_map__is_internal(struct bpf_map *map)
{
return map->libbpf_type != LIBBPF_MAP_UNSPEC;
}
void bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex) void bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex)
{ {
map->map_ifindex = ifindex; map->map_ifindex = ifindex;
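With the naming scheme from bpf_object__init_internal_map() above ("%.8s%.7s" of object name plus section suffix), an object file called test_global_data.o ends up with internal maps named test_glo.data, test_glo.bss and test_glo.rodata. A hedged usage sketch from the application side, assuming obj was opened from that file:

struct bpf_map *map = bpf_object__find_map_by_name(obj, "test_glo.rodata");

if (map && bpf_map__is_internal(map)) {
	int map_fd = bpf_map__fd(map);
	/* map_fd refers to the read-only array backing the .rodata section */
}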

View file

@ -75,6 +75,10 @@ struct bpf_object *__bpf_object__open_xattr(struct bpf_object_open_attr *attr,
LIBBPF_API struct bpf_object *bpf_object__open_buffer(void *obj_buf, LIBBPF_API struct bpf_object *bpf_object__open_buffer(void *obj_buf,
size_t obj_buf_sz, size_t obj_buf_sz,
const char *name); const char *name);
int bpf_object__section_size(const struct bpf_object *obj, const char *name,
__u32 *size);
int bpf_object__variable_offset(const struct bpf_object *obj, const char *name,
__u32 *off);
LIBBPF_API int bpf_object__pin_maps(struct bpf_object *obj, const char *path); LIBBPF_API int bpf_object__pin_maps(struct bpf_object *obj, const char *path);
LIBBPF_API int bpf_object__unpin_maps(struct bpf_object *obj, LIBBPF_API int bpf_object__unpin_maps(struct bpf_object *obj,
const char *path); const char *path);
@ -301,6 +305,7 @@ LIBBPF_API void *bpf_map__priv(struct bpf_map *map);
LIBBPF_API int bpf_map__reuse_fd(struct bpf_map *map, int fd); LIBBPF_API int bpf_map__reuse_fd(struct bpf_map *map, int fd);
LIBBPF_API int bpf_map__resize(struct bpf_map *map, __u32 max_entries); LIBBPF_API int bpf_map__resize(struct bpf_map *map, __u32 max_entries);
LIBBPF_API bool bpf_map__is_offload_neutral(struct bpf_map *map); LIBBPF_API bool bpf_map__is_offload_neutral(struct bpf_map *map);
LIBBPF_API bool bpf_map__is_internal(struct bpf_map *map);
LIBBPF_API void bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex); LIBBPF_API void bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex);
LIBBPF_API int bpf_map__pin(struct bpf_map *map, const char *path); LIBBPF_API int bpf_map__pin(struct bpf_map *map, const char *path);
LIBBPF_API int bpf_map__unpin(struct bpf_map *map, const char *path); LIBBPF_API int bpf_map__unpin(struct bpf_map *map, const char *path);

View file

@ -157,3 +157,10 @@ LIBBPF_0.0.2 {
bpf_program__bpil_addr_to_offs; bpf_program__bpil_addr_to_offs;
bpf_program__bpil_offs_to_addr; bpf_program__bpil_offs_to_addr;
} LIBBPF_0.0.1; } LIBBPF_0.0.1;
LIBBPF_0.0.3 {
global:
bpf_map__is_internal;
bpf_map_freeze;
btf__finalize_data;
} LIBBPF_0.0.2;

View file

@ -9,14 +9,14 @@
#define SEC(NAME) __attribute__((section(NAME), used)) #define SEC(NAME) __attribute__((section(NAME), used))
/* helper functions called from eBPF programs written in C */ /* helper functions called from eBPF programs written in C */
static void *(*bpf_map_lookup_elem)(void *map, void *key) = static void *(*bpf_map_lookup_elem)(void *map, const void *key) =
(void *) BPF_FUNC_map_lookup_elem; (void *) BPF_FUNC_map_lookup_elem;
static int (*bpf_map_update_elem)(void *map, void *key, void *value, static int (*bpf_map_update_elem)(void *map, const void *key, const void *value,
unsigned long long flags) = unsigned long long flags) =
(void *) BPF_FUNC_map_update_elem; (void *) BPF_FUNC_map_update_elem;
static int (*bpf_map_delete_elem)(void *map, void *key) = static int (*bpf_map_delete_elem)(void *map, const void *key) =
(void *) BPF_FUNC_map_delete_elem; (void *) BPF_FUNC_map_delete_elem;
static int (*bpf_map_push_elem)(void *map, void *value, static int (*bpf_map_push_elem)(void *map, const void *value,
unsigned long long flags) = unsigned long long flags) =
(void *) BPF_FUNC_map_push_elem; (void *) BPF_FUNC_map_push_elem;
static int (*bpf_map_pop_elem)(void *map, void *value) = static int (*bpf_map_pop_elem)(void *map, void *value) =

View file

@ -0,0 +1,157 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
static void test_global_data_number(struct bpf_object *obj, __u32 duration)
{
int i, err, map_fd;
uint64_t num;
map_fd = bpf_find_map(__func__, obj, "result_number");
if (map_fd < 0) {
error_cnt++;
return;
}
struct {
char *name;
uint32_t key;
uint64_t num;
} tests[] = {
{ "relocate .bss reference", 0, 0 },
{ "relocate .data reference", 1, 42 },
{ "relocate .rodata reference", 2, 24 },
{ "relocate .bss reference", 3, 0 },
{ "relocate .data reference", 4, 0xffeeff },
{ "relocate .rodata reference", 5, 0xabab },
{ "relocate .bss reference", 6, 1234 },
{ "relocate .bss reference", 7, 0 },
{ "relocate .rodata reference", 8, 0xab },
{ "relocate .rodata reference", 9, 0x1111111111111111 },
{ "relocate .rodata reference", 10, ~0 },
};
for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
err = bpf_map_lookup_elem(map_fd, &tests[i].key, &num);
CHECK(err || num != tests[i].num, tests[i].name,
"err %d result %lx expected %lx\n",
err, num, tests[i].num);
}
}
static void test_global_data_string(struct bpf_object *obj, __u32 duration)
{
int i, err, map_fd;
char str[32];
map_fd = bpf_find_map(__func__, obj, "result_string");
if (map_fd < 0) {
error_cnt++;
return;
}
struct {
char *name;
uint32_t key;
char str[32];
} tests[] = {
{ "relocate .rodata reference", 0, "abcdefghijklmnopqrstuvwxyz" },
{ "relocate .data reference", 1, "abcdefghijklmnopqrstuvwxyz" },
{ "relocate .bss reference", 2, "" },
{ "relocate .data reference", 3, "abcdexghijklmnopqrstuvwxyz" },
{ "relocate .bss reference", 4, "\0\0hello" },
};
for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
err = bpf_map_lookup_elem(map_fd, &tests[i].key, str);
CHECK(err || memcmp(str, tests[i].str, sizeof(str)),
tests[i].name, "err %d result \'%s\' expected \'%s\'\n",
err, str, tests[i].str);
}
}
struct foo {
__u8 a;
__u32 b;
__u64 c;
};
static void test_global_data_struct(struct bpf_object *obj, __u32 duration)
{
int i, err, map_fd;
struct foo val;
map_fd = bpf_find_map(__func__, obj, "result_struct");
if (map_fd < 0) {
error_cnt++;
return;
}
struct {
char *name;
uint32_t key;
struct foo val;
} tests[] = {
{ "relocate .rodata reference", 0, { 42, 0xfefeefef, 0x1111111111111111ULL, } },
{ "relocate .bss reference", 1, { } },
{ "relocate .rodata reference", 2, { } },
{ "relocate .data reference", 3, { 41, 0xeeeeefef, 0x2111111111111111ULL, } },
};
for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
err = bpf_map_lookup_elem(map_fd, &tests[i].key, &val);
CHECK(err || memcmp(&val, &tests[i].val, sizeof(val)),
tests[i].name, "err %d result { %u, %u, %llu } expected { %u, %u, %llu }\n",
err, val.a, val.b, val.c, tests[i].val.a, tests[i].val.b, tests[i].val.c);
}
}
static void test_global_data_rdonly(struct bpf_object *obj, __u32 duration)
{
int err = -ENOMEM, map_fd, zero = 0;
struct bpf_map *map;
__u8 *buff;
map = bpf_object__find_map_by_name(obj, "test_glo.rodata");
if (!map || !bpf_map__is_internal(map)) {
error_cnt++;
return;
}
map_fd = bpf_map__fd(map);
if (map_fd < 0) {
error_cnt++;
return;
}
buff = malloc(bpf_map__def(map)->value_size);
if (buff)
err = bpf_map_update_elem(map_fd, &zero, buff, 0);
free(buff);
CHECK(!err || errno != EPERM, "test .rodata read-only map",
"err %d errno %d\n", err, errno);
}
void test_global_data(void)
{
const char *file = "./test_global_data.o";
__u32 duration = 0, retval;
struct bpf_object *obj;
int err, prog_fd;
err = bpf_prog_load(file, BPF_PROG_TYPE_SCHED_CLS, &obj, &prog_fd);
if (CHECK(err, "load program", "error %d loading %s\n", err, file))
return;
err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
NULL, NULL, &retval, &duration);
CHECK(err || retval, "pass global data run",
"err %d errno %d retval %d duration %d\n",
err, errno, retval, duration);
test_global_data_number(obj, duration);
test_global_data_string(obj, duration);
test_global_data_struct(obj, duration);
test_global_data_rdonly(obj, duration);
bpf_object__close(obj);
}

View file

@ -0,0 +1,106 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2019 Isovalent, Inc.
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <string.h>
#include "bpf_helpers.h"
struct bpf_map_def SEC("maps") result_number = {
.type = BPF_MAP_TYPE_ARRAY,
.key_size = sizeof(__u32),
.value_size = sizeof(__u64),
.max_entries = 11,
};
struct bpf_map_def SEC("maps") result_string = {
.type = BPF_MAP_TYPE_ARRAY,
.key_size = sizeof(__u32),
.value_size = 32,
.max_entries = 5,
};
struct foo {
__u8 a;
__u32 b;
__u64 c;
};
struct bpf_map_def SEC("maps") result_struct = {
.type = BPF_MAP_TYPE_ARRAY,
.key_size = sizeof(__u32),
.value_size = sizeof(struct foo),
.max_entries = 5,
};
/* Relocation tests for __u64s. */
static __u64 num0;
static __u64 num1 = 42;
static const __u64 num2 = 24;
static __u64 num3 = 0;
static __u64 num4 = 0xffeeff;
static const __u64 num5 = 0xabab;
static const __u64 num6 = 0xab;
/* Relocation tests for strings. */
static const char str0[32] = "abcdefghijklmnopqrstuvwxyz";
static char str1[32] = "abcdefghijklmnopqrstuvwxyz";
static char str2[32];
/* Relocation tests for structs. */
static const struct foo struct0 = {
.a = 42,
.b = 0xfefeefef,
.c = 0x1111111111111111ULL,
};
static struct foo struct1;
static const struct foo struct2;
static struct foo struct3 = {
.a = 41,
.b = 0xeeeeefef,
.c = 0x2111111111111111ULL,
};
#define test_reloc(map, num, var) \
do { \
__u32 key = num; \
bpf_map_update_elem(&result_##map, &key, var, 0); \
} while (0)
SEC("static_data_load")
int load_static_data(struct __sk_buff *skb)
{
static const __u64 bar = ~0;
test_reloc(number, 0, &num0);
test_reloc(number, 1, &num1);
test_reloc(number, 2, &num2);
test_reloc(number, 3, &num3);
test_reloc(number, 4, &num4);
test_reloc(number, 5, &num5);
num4 = 1234;
test_reloc(number, 6, &num4);
test_reloc(number, 7, &num0);
test_reloc(number, 8, &num6);
test_reloc(string, 0, str0);
test_reloc(string, 1, str1);
test_reloc(string, 2, str2);
str1[5] = 'x';
test_reloc(string, 3, str1);
__builtin_memcpy(&str2[2], "hello", sizeof("hello"));
test_reloc(string, 4, str2);
test_reloc(struct, 0, &struct0);
test_reloc(struct, 1, &struct1);
test_reloc(struct, 2, &struct2);
test_reloc(struct, 3, &struct3);
test_reloc(number, 9, &struct0.c);
test_reloc(number, 10, &bar);
return TC_ACT_OK;
}
char _license[] SEC("license") = "GPL";

View file

@ -85,6 +85,11 @@ static int __base_pr(enum libbpf_print_level level __attribute__((unused)),
#define BTF_UNION_ENC(name, nr_elems, sz) \ #define BTF_UNION_ENC(name, nr_elems, sz) \
BTF_TYPE_ENC(name, BTF_INFO_ENC(BTF_KIND_UNION, 0, nr_elems), sz) BTF_TYPE_ENC(name, BTF_INFO_ENC(BTF_KIND_UNION, 0, nr_elems), sz)
#define BTF_VAR_ENC(name, type, linkage) \
BTF_TYPE_ENC(name, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), type), (linkage)
#define BTF_VAR_SECINFO_ENC(type, offset, size) \
(type), (offset), (size)
#define BTF_MEMBER_ENC(name, type, bits_offset) \ #define BTF_MEMBER_ENC(name, type, bits_offset) \
(name), (type), (bits_offset) (name), (type), (bits_offset)
#define BTF_ENUM_ENC(name, val) (name), (val) #define BTF_ENUM_ENC(name, val) (name), (val)
@ -291,7 +296,6 @@ static struct btf_raw_test raw_tests[] = {
.value_type_id = 3, .value_type_id = 3,
.max_entries = 4, .max_entries = 4,
}, },
{ {
.descr = "struct test #3 Invalid member offset", .descr = "struct test #3 Invalid member offset",
.raw_types = { .raw_types = {
@ -319,7 +323,664 @@ static struct btf_raw_test raw_tests[] = {
.btf_load_err = true, .btf_load_err = true,
.err_str = "Invalid member bits_offset", .err_str = "Invalid member bits_offset",
}, },
/*
* struct A {
* unsigned long long m;
* int n;
* char o;
* [3 bytes hole]
* int p[8];
* };
*/
{
.descr = "global data test #1",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = "struct_test1_map",
.key_size = sizeof(int),
.value_size = 48,
.key_type_id = 1,
.value_type_id = 5,
.max_entries = 4,
},
/*
* struct A {
* unsigned long long m;
* int n;
* char o;
* [3 bytes hole]
* int p[8];
* };
* static struct A t; <- in .bss
*/
{
.descr = "global data test #2",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
/* static struct A t */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
/* .bss section */ /* [7] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 48),
BTF_VAR_SECINFO_ENC(6, 0, 48),
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p\0t\0.bss",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 48,
.key_type_id = 0,
.value_type_id = 7,
.max_entries = 1,
},
{
.descr = "global data test #3",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* static int t */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [2] */
/* .bss section */ /* [3] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(2, 0, 4),
BTF_END_RAW,
},
.str_sec = "\0t\0.bss",
.str_sec_size = sizeof("\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 3,
.max_entries = 1,
},
{
.descr = "global data test #4, unsupported linkage",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* static int t */
BTF_VAR_ENC(NAME_TBD, 1, 2), /* [2] */
/* .bss section */ /* [3] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(2, 0, 4),
BTF_END_RAW,
},
.str_sec = "\0t\0.bss",
.str_sec_size = sizeof("\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 3,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Linkage not supported",
},
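
Test #4 passes linkage 2, which falls outside the two linkage values defined for BTF_KIND_VAR in this series; a sketch of that enum as assumed here (names and values taken to match the uapi additions):

enum {
	BTF_VAR_STATIC = 0,		/* what tests #2 and #3 use */
	BTF_VAR_GLOBAL_ALLOCATED = 1,	/* global, allocated in a map value */
	/* any other value, such as the 2 above, yields "Linkage not supported" */
};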
{
.descr = "global data test #5, invalid var type",
.raw_types = {
/* static void t */
BTF_VAR_ENC(NAME_TBD, 0, 0), /* [1] */
/* .bss section */ /* [2] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(1, 0, 4),
BTF_END_RAW,
},
.str_sec = "\0t\0.bss",
.str_sec_size = sizeof("\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 2,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid type_id",
},
{
.descr = "global data test #6, invalid var type (fwd type)",
.raw_types = {
/* union A */
BTF_TYPE_ENC(NAME_TBD,
BTF_INFO_ENC(BTF_KIND_FWD, 1, 0), 0), /* [1] */
/* static union A t */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [2] */
/* .bss section */ /* [3] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(2, 0, 4),
BTF_END_RAW,
},
.str_sec = "\0A\0t\0.bss",
.str_sec_size = sizeof("\0A\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 2,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid type",
},
{
.descr = "global data test #7, invalid var type (fwd type)",
.raw_types = {
/* union A */
BTF_TYPE_ENC(NAME_TBD,
BTF_INFO_ENC(BTF_KIND_FWD, 1, 0), 0), /* [1] */
/* static union A t */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [2] */
/* .bss section */ /* [3] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(1, 0, 4),
BTF_END_RAW,
},
.str_sec = "\0A\0t\0.bss",
.str_sec_size = sizeof("\0A\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 2,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid type",
},
{
.descr = "global data test #8, invalid var size",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
/* static struct A t */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
/* .bss section */ /* [7] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 48),
BTF_VAR_SECINFO_ENC(6, 0, 47),
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p\0t\0.bss",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 48,
.key_type_id = 0,
.value_type_id = 7,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid size",
},
{
.descr = "global data test #9, invalid var size",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
/* static struct A t */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
/* .bss section */ /* [7] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 46),
BTF_VAR_SECINFO_ENC(6, 0, 48),
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p\0t\0.bss",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 48,
.key_type_id = 0,
.value_type_id = 7,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid size",
},
{
.descr = "global data test #10, invalid var size",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
/* static struct A t */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
/* .bss section */ /* [7] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 46),
BTF_VAR_SECINFO_ENC(6, 0, 46),
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p\0t\0.bss",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 48,
.key_type_id = 0,
.value_type_id = 7,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid size",
},
{
.descr = "global data test #11, multiple section members",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
/* static struct A t */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
/* static int u */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [7] */
/* .bss section */ /* [8] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 2), 62),
BTF_VAR_SECINFO_ENC(6, 10, 48),
BTF_VAR_SECINFO_ENC(7, 58, 4),
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p\0t\0u\0.bss",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p\0t\0u\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 62,
.key_type_id = 0,
.value_type_id = 8,
.max_entries = 1,
},
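
Test #11 is the accepted baseline for the offset checks that follow: t occupies bytes [10, 58) and u occupies [58, 62) of the 62-byte section. A compact sketch of the validation these cases appear to exercise (an interpretation, not the kernel code):

#include <stdint.h>

static int vsi_ok(uint32_t sec_size, uint32_t prev_end,
		  uint32_t off, uint32_t size)
{
	if (off < prev_end)		/* out of order or overlapping: "Invalid offset" (tests #13, #14) */
		return 0;
	if (off + size > sec_size)	/* past the section end: "Invalid offset+size" (test #12) */
		return 0;
	return 1;
}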
{
.descr = "global data test #12, invalid offset",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
/* static struct A t */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
/* static int u */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [7] */
/* .bss section */ /* [8] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 2), 62),
BTF_VAR_SECINFO_ENC(6, 10, 48),
BTF_VAR_SECINFO_ENC(7, 60, 4),
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p\0t\0u\0.bss",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p\0t\0u\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 62,
.key_type_id = 0,
.value_type_id = 8,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid offset+size",
},
{
.descr = "global data test #13, invalid offset",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
/* static struct A t */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
/* static int u */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [7] */
/* .bss section */ /* [8] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 2), 62),
BTF_VAR_SECINFO_ENC(6, 10, 48),
BTF_VAR_SECINFO_ENC(7, 12, 4),
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p\0t\0u\0.bss",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p\0t\0u\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 62,
.key_type_id = 0,
.value_type_id = 8,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid offset",
},
{
.descr = "global data test #14, invalid offset",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* unsigned long long */
BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */
/* char */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), /* [3] */
/* int[8] */
BTF_TYPE_ARRAY_ENC(1, 1, 8), /* [4] */
/* struct A { */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 4), 48),
BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* unsigned long long m;*/
BTF_MEMBER_ENC(NAME_TBD, 1, 64),/* int n; */
BTF_MEMBER_ENC(NAME_TBD, 3, 96),/* char o; */
BTF_MEMBER_ENC(NAME_TBD, 4, 128),/* int p[8] */
/* } */
/* static struct A t */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
/* static int u */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [7] */
/* .bss section */ /* [8] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 2), 62),
BTF_VAR_SECINFO_ENC(7, 58, 4),
BTF_VAR_SECINFO_ENC(6, 10, 48),
BTF_END_RAW,
},
.str_sec = "\0A\0m\0n\0o\0p\0t\0u\0.bss",
.str_sec_size = sizeof("\0A\0m\0n\0o\0p\0t\0u\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 62,
.key_type_id = 0,
.value_type_id = 8,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid offset",
},
{
.descr = "global data test #15, not var kind",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [2] */
/* .bss section */ /* [3] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(1, 0, 4),
BTF_END_RAW,
},
.str_sec = "\0A\0t\0.bss",
.str_sec_size = sizeof("\0A\0t\0.bss"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 3,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Not a VAR kind member",
},
{
.descr = "global data test #16, invalid var referencing sec",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
BTF_VAR_ENC(NAME_TBD, 5, 0), /* [2] */
BTF_VAR_ENC(NAME_TBD, 2, 0), /* [3] */
/* a section */ /* [4] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(3, 0, 4),
/* a section */ /* [5] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(6, 0, 4),
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [6] */
BTF_END_RAW,
},
.str_sec = "\0A\0t\0s\0a\0a",
.str_sec_size = sizeof("\0A\0t\0s\0a\0a"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 4,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid type_id",
},
{
.descr = "global data test #17, invalid var referencing var",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [2] */
BTF_VAR_ENC(NAME_TBD, 2, 0), /* [3] */
/* a section */ /* [4] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(3, 0, 4),
BTF_END_RAW,
},
.str_sec = "\0A\0t\0s\0a\0a",
.str_sec_size = sizeof("\0A\0t\0s\0a\0a"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 4,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid type_id",
},
{
.descr = "global data test #18, invalid var loop",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
BTF_VAR_ENC(NAME_TBD, 2, 0), /* [2] */
/* .bss section */ /* [3] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(2, 0, 4),
BTF_END_RAW,
},
.str_sec = "\0A\0t\0aaa",
.str_sec_size = sizeof("\0A\0t\0aaa"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 4,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid type_id",
},
{
.descr = "global data test #19, invalid var referencing var",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
BTF_VAR_ENC(NAME_TBD, 3, 0), /* [2] */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [3] */
BTF_END_RAW,
},
.str_sec = "\0A\0t\0s\0a\0a",
.str_sec_size = sizeof("\0A\0t\0s\0a\0a"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 4,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid type_id",
},
{
.descr = "global data test #20, invalid ptr referencing var",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* PTR type_id=3 */ /* [2] */
BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 3),
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [3] */
BTF_END_RAW,
},
.str_sec = "\0A\0t\0s\0a\0a",
.str_sec_size = sizeof("\0A\0t\0s\0a\0a"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 4,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid type_id",
},
{
.descr = "global data test #21, var included in struct",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
/* struct A { */ /* [2] */
BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 2), sizeof(int) * 2),
BTF_MEMBER_ENC(NAME_TBD, 1, 0), /* int m; */
BTF_MEMBER_ENC(NAME_TBD, 3, 32),/* VAR type_id=3; */
/* } */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [3] */
BTF_END_RAW,
},
.str_sec = "\0A\0t\0s\0a\0a",
.str_sec_size = sizeof("\0A\0t\0s\0a\0a"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 4,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid member",
},
{
.descr = "global data test #22, array of var",
.raw_types = {
/* int */
BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
BTF_TYPE_ARRAY_ENC(3, 1, 4), /* [2] */
BTF_VAR_ENC(NAME_TBD, 1, 0), /* [3] */
BTF_END_RAW,
},
.str_sec = "\0A\0t\0s\0a\0a",
.str_sec_size = sizeof("\0A\0t\0s\0a\0a"),
.map_type = BPF_MAP_TYPE_ARRAY,
.map_name = ".bss",
.key_size = sizeof(int),
.value_size = 4,
.key_type_id = 0,
.value_type_id = 4,
.max_entries = 1,
.btf_load_err = true,
.err_str = "Invalid elem",
},
/* Test member exceeds the size of struct.
*
* struct A {


@ -52,7 +52,7 @@
#define MAX_INSNS BPF_MAXINSNS
#define MAX_TEST_INSNS 1000000
#define MAX_FIXUPS 8
#define MAX_NR_MAPS 16 /* was 14 */
#define MAX_TEST_RUNS 8
#define POINTER_VALUE 0xcafe4all
#define TEST_DATA_LEN 64
@ -82,6 +82,9 @@ struct bpf_test {
int fixup_cgroup_storage[MAX_FIXUPS];
int fixup_percpu_cgroup_storage[MAX_FIXUPS];
int fixup_map_spin_lock[MAX_FIXUPS];
int fixup_map_array_ro[MAX_FIXUPS];
int fixup_map_array_wo[MAX_FIXUPS];
int fixup_map_array_small[MAX_FIXUPS];
const char *errstr;
const char *errstr_unpriv;
uint32_t retval, retval_unpriv, insn_processed;
@ -285,13 +288,15 @@ static bool skip_unsupported_map(enum bpf_map_type map_type)
return false;
}
static int __create_map(uint32_t type, uint32_t size_key,
uint32_t size_value, uint32_t max_elem,
uint32_t extra_flags)
{
int fd;
fd = bpf_create_map(type, size_key, size_value, max_elem,
(type == BPF_MAP_TYPE_HASH ?
BPF_F_NO_PREALLOC : 0) | extra_flags);
if (fd < 0) {
if (skip_unsupported_map(type))
return -1;
@ -301,6 +306,12 @@ static int create_map(uint32_t type, uint32_t size_key,
return fd;
}
static int create_map(uint32_t type, uint32_t size_key,
uint32_t size_value, uint32_t max_elem)
{
return __create_map(type, size_key, size_value, max_elem, 0);
}
static void update_map(int fd, int index)
{
struct test_val value = {
@ -527,6 +538,9 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type,
int *fixup_cgroup_storage = test->fixup_cgroup_storage;
int *fixup_percpu_cgroup_storage = test->fixup_percpu_cgroup_storage;
int *fixup_map_spin_lock = test->fixup_map_spin_lock;
int *fixup_map_array_ro = test->fixup_map_array_ro;
int *fixup_map_array_wo = test->fixup_map_array_wo;
int *fixup_map_array_small = test->fixup_map_array_small;
if (test->fill_helper) {
test->fill_insns = calloc(MAX_TEST_INSNS, sizeof(struct bpf_insn));
@ -652,6 +666,35 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type,
fixup_map_spin_lock++;
} while (*fixup_map_spin_lock);
}
if (*fixup_map_array_ro) {
map_fds[14] = __create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
sizeof(struct test_val), 1,
BPF_F_RDONLY_PROG);
update_map(map_fds[14], 0);
do {
prog[*fixup_map_array_ro].imm = map_fds[14];
fixup_map_array_ro++;
} while (*fixup_map_array_ro);
}
if (*fixup_map_array_wo) {
map_fds[15] = __create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
sizeof(struct test_val), 1,
BPF_F_WRONLY_PROG);
update_map(map_fds[15], 0);
do {
prog[*fixup_map_array_wo].imm = map_fds[15];
fixup_map_array_wo++;
} while (*fixup_map_array_wo);
}
if (*fixup_map_array_small) {
map_fds[16] = __create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
1, 1, 0);
update_map(map_fds[16], 0);
do {
prog[*fixup_map_array_small].imm = map_fds[16];
fixup_map_array_small++;
} while (*fixup_map_array_small);
}
}
static int set_admin(bool admin)
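
Note that BPF_F_RDONLY_PROG and BPF_F_WRONLY_PROG only restrict the program-side access, which is why update_map() above can still seed the read-only fixture from userspace. A hedged userspace illustration of the same idea (not part of the selftest; value size is arbitrary here):

#include <linux/bpf.h>
#include <bpf/bpf.h>

static int create_seeded_rdonly_array(void)
{
	char value[48] = { 0 };		/* arbitrary 48-byte value for the sketch */
	int key = 0, fd;

	fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
			    sizeof(value), 1, BPF_F_RDONLY_PROG);
	if (fd < 0)
		return fd;

	/* updates from the syscall side remain allowed */
	bpf_map_update_elem(fd, &key, value, BPF_ANY);
	return fd;
}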


@ -217,3 +217,162 @@
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"valid read map access into a read-only array 1",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_ro = { 3 },
.result = ACCEPT,
.retval = 28,
},
{
"valid read map access into a read-only array 2",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
BPF_MOV64_IMM(BPF_REG_2, 4),
BPF_MOV64_IMM(BPF_REG_3, 0),
BPF_MOV64_IMM(BPF_REG_4, 0),
BPF_MOV64_IMM(BPF_REG_5, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_csum_diff),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.fixup_map_array_ro = { 3 },
.result = ACCEPT,
.retval = -29,
},
{
"invalid write map access into a read-only array 1",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
BPF_EXIT_INSN(),
},
.fixup_map_array_ro = { 3 },
.result = REJECT,
.errstr = "write into map forbidden",
},
{
"invalid write map access into a read-only array 2",
.insns = {
BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
BPF_MOV64_IMM(BPF_REG_4, 8),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_skb_load_bytes),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.fixup_map_array_ro = { 4 },
.result = REJECT,
.errstr = "write into map forbidden",
},
{
"valid write map access into a write-only array 1",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
.fixup_map_array_wo = { 3 },
.result = ACCEPT,
.retval = 1,
},
{
"valid write map access into a write-only array 2",
.insns = {
BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
BPF_MOV64_IMM(BPF_REG_4, 8),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_skb_load_bytes),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.fixup_map_array_wo = { 4 },
.result = ACCEPT,
.retval = 0,
},
{
"invalid read map access into a write-only array 1",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_wo = { 3 },
.result = REJECT,
.errstr = "read from map forbidden",
},
{
"invalid read map access into a write-only array 2",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
BPF_MOV64_IMM(BPF_REG_2, 4),
BPF_MOV64_IMM(BPF_REG_3, 0),
BPF_MOV64_IMM(BPF_REG_4, 0),
BPF_MOV64_IMM(BPF_REG_5, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_csum_diff),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.fixup_map_array_wo = { 3 },
.result = REJECT,
.errstr = "read from map forbidden",
},


@ -0,0 +1,347 @@
{
"direct map access, write test 1",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 0),
BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.retval = 1,
},
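
BPF_LD_MAP_VALUE used throughout this file is the two-instruction ld_imm64 form with src_reg = BPF_PSEUDO_MAP_VALUE: the map fd sits in the first instruction's imm (which is what the fixup_map_array_48b indices patch in), the value offset sits in the second instruction's imm, and both off fields must stay zero, as the "invalid insn" tests at the end enforce. The helper is presumably defined roughly like this, on top of the BPF_LD_IMM64_RAW_FULL macro those tests use (a reconstruction, argument order DST, SRC, OFF1, OFF2, IMM1, IMM2 assumed):

#define BPF_LD_MAP_VALUE(DST, MAP_FD, VALUE_OFF)		\
	BPF_LD_IMM64_RAW_FULL(DST, BPF_PSEUDO_MAP_VALUE, 0, 0,	\
			      MAP_FD, VALUE_OFF)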
{
"direct map access, write test 2",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 8),
BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.retval = 1,
},
{
"direct map access, write test 3",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 8),
BPF_ST_MEM(BPF_DW, BPF_REG_1, 8, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.retval = 1,
},
{
"direct map access, write test 4",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 40),
BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.retval = 1,
},
{
"direct map access, write test 5",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 32),
BPF_ST_MEM(BPF_DW, BPF_REG_1, 8, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.retval = 1,
},
{
"direct map access, write test 6",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 40),
BPF_ST_MEM(BPF_DW, BPF_REG_1, 4, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "R1 min value is outside of the array range",
},
{
"direct map access, write test 7",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, -1),
BPF_ST_MEM(BPF_DW, BPF_REG_1, 4, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "direct value offset of 4294967295 is not allowed",
},
{
"direct map access, write test 8",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 1),
BPF_ST_MEM(BPF_DW, BPF_REG_1, -1, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.retval = 1,
},
{
"direct map access, write test 9",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 48),
BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 4242),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "invalid access to map value pointer",
},
{
"direct map access, write test 10",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 47),
BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 4),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = ACCEPT,
.retval = 1,
},
{
"direct map access, write test 11",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 48),
BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 4),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "invalid access to map value pointer",
},
{
"direct map access, write test 12",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, (1<<29)),
BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 4),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "direct value offset of 536870912 is not allowed",
},
{
"direct map access, write test 13",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, (1<<29)-1),
BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 4),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "invalid access to map value pointer, value_size=48 off=536870911",
},
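
Tests 12 and 13 bracket the offset range the loader will even encode: 1 << 29 is refused outright, while (1 << 29) - 1 is accepted as an encoding but still fails the value_size = 48 bounds check. A throwaway check of those constants (illustration only):

#include <stdio.h>

int main(void)
{
	printf("1 << 29       = %u\n", 1U << 29);	/* 536870912, test 12 */
	printf("(1 << 29) - 1 = %u\n", (1U << 29) - 1);	/* 536870911, test 13 */
	return 0;
}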
{
"direct map access, write test 14",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 47),
BPF_LD_MAP_VALUE(BPF_REG_2, 0, 46),
BPF_ST_MEM(BPF_H, BPF_REG_2, 0, 0xffff),
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1, 3 },
.result = ACCEPT,
.retval = 0xff,
},
{
"direct map access, write test 15",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 46),
BPF_LD_MAP_VALUE(BPF_REG_2, 0, 46),
BPF_ST_MEM(BPF_H, BPF_REG_2, 0, 0xffff),
BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_1, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1, 3 },
.result = ACCEPT,
.retval = 0xffff,
},
{
"direct map access, write test 16",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 46),
BPF_LD_MAP_VALUE(BPF_REG_2, 0, 47),
BPF_ST_MEM(BPF_H, BPF_REG_2, 0, 0xffff),
BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_1, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1, 3 },
.result = REJECT,
.errstr = "invalid access to map value, value_size=48 off=47 size=2",
},
{
"direct map access, write test 17",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 46),
BPF_LD_MAP_VALUE(BPF_REG_2, 0, 46),
BPF_ST_MEM(BPF_H, BPF_REG_2, 1, 0xffff),
BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_1, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1, 3 },
.result = REJECT,
.errstr = "invalid access to map value, value_size=48 off=47 size=2",
},
{
"direct map access, write test 18",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 0),
BPF_ST_MEM(BPF_H, BPF_REG_1, 0, 42),
BPF_EXIT_INSN(),
},
.fixup_map_array_small = { 1 },
.result = REJECT,
.errstr = "R1 min value is outside of the array range",
},
{
"direct map access, write test 19",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 0),
BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42),
BPF_EXIT_INSN(),
},
.fixup_map_array_small = { 1 },
.result = ACCEPT,
.retval = 1,
},
{
"direct map access, write test 20",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_MAP_VALUE(BPF_REG_1, 0, 1),
BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42),
BPF_EXIT_INSN(),
},
.fixup_map_array_small = { 1 },
.result = REJECT,
.errstr = "invalid access to map value pointer",
},
{
"direct map access, invalid insn test 1",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_VALUE, 0, 1, 0, 47),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "invalid bpf_ld_imm64 insn",
},
{
"direct map access, invalid insn test 2",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_VALUE, 1, 0, 0, 47),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "BPF_LD_IMM64 uses reserved fields",
},
{
"direct map access, invalid insn test 3",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_VALUE, ~0, 0, 0, 47),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "BPF_LD_IMM64 uses reserved fields",
},
{
"direct map access, invalid insn test 4",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_VALUE, 0, ~0, 0, 47),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "invalid bpf_ld_imm64 insn",
},
{
"direct map access, invalid insn test 5",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_VALUE, ~0, ~0, 0, 47),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "invalid bpf_ld_imm64 insn",
},
{
"direct map access, invalid insn test 6",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_FD, ~0, 0, 0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "BPF_LD_IMM64 uses reserved fields",
},
{
"direct map access, invalid insn test 7",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_FD, 0, ~0, 0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "invalid bpf_ld_imm64 insn",
},
{
"direct map access, invalid insn test 8",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_FD, ~0, ~0, 0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "invalid bpf_ld_imm64 insn",
},
{
"direct map access, invalid insn test 9",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_FD, 0, 0, 0, 47),
BPF_EXIT_INSN(),
},
.fixup_map_array_48b = { 1 },
.result = REJECT,
.errstr = "unrecognized bpf_ld_imm64 insn",
},