Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:

====================
pull-request: bpf-next 2019-01-29

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Teach the verifier dead code removal; this also allows for optimizing /
   removing conditional branches around dead code and shrinking the
   resulting image. Code-store constrained architectures like nfp would
   have a hard time doing this at the JIT level, from Jakub.

2) Add JMP32 instructions to the BPF ISA in order to allow for optimized
   code generation for 32-bit sub-registers. Evaluation shows that this
   can result in a code reduction of ~5-20% compared to 64 bit-only code
   generation. Also add implementations for most JITs, from Jiong.

3) Add support for __int128 types in BTF, which is also needed for
   vmlinux's BTF conversion to work, from Yonghong.

4) Add a new command to bpftool in order to dump a list of BPF-related
   parameters from the system or for a specific network device, e.g. in
   terms of available prog/map types or helper functions, from Quentin.

5) Add an AF_XDP sock_diag interface for querying sockets from user
   space, which provides information about the RX/TX/fill/completion
   rings, umem, memory usage etc, from Björn.

6) Add skb context access for the skb_shared_info->gso_segs field, from Eric.

7) Add support for testing flow dissector BPF programs by extending the
   existing BPF_PROG_TEST_RUN infrastructure, from Stanislav.

8) Split BPF kselftest's test_verifier into various subgroups of tests
   in order to better deal with merge conflicts in this area, from Jakub.

9) Add support for queue/stack manipulations in bpftool, from Stanislav.

10) Document BTF, from Yonghong.

11) Dump supported ELF section names in libbpf on program load failure,
    from Taeung.

12) Silence a false positive compiler warning in the verifier's BTF
    handling, from Peter.

13) Fix a help string in bpftool's feature probing, from Prashant.

14) Remove duplicate includes in BPF kselftests, from Yue.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
This commit is contained in:
commit ec7146db15
@@ -0,0 +1,870 @@
=====================
BPF Type Format (BTF)
=====================

1. Introduction
***************

BTF (BPF Type Format) is the metadata format that encodes the debug
info related to BPF programs and maps. The name BTF was used initially
to describe data types. BTF was later extended to include function
info for defined subroutines, and line info for source/line
information.

The debug info is used for map pretty printing, function signatures,
etc. The function signatures enable better bpf program/function kernel
symbols. The line info helps generate source-annotated translated byte
code, jited code and verifier logs.

The BTF specification contains two parts,
  * BTF kernel API
  * BTF ELF file format

The kernel API is the contract between user space and the kernel. The
kernel verifies the BTF info before using it. The ELF file format is a
user space contract between an ELF file and the libbpf loader.

The type and string sections are part of the BTF kernel API, describing
the debug info (mostly types related) referenced by the bpf program.
These two sections are discussed in detail in :ref:`BTF_Type_String`.

.. _BTF_Type_String:

2. BTF Type and String Encoding
*******************************

The file ``include/uapi/linux/btf.h`` provides high-level definitions
of how types and strings are encoded.

The beginning of the data blob must be::

    struct btf_header {
        __u16   magic;
        __u8    version;
        __u8    flags;
        __u32   hdr_len;

        /* All offsets are in bytes relative to the end of this header */
        __u32   type_off;       /* offset of type section       */
        __u32   type_len;       /* length of type section       */
        __u32   str_off;        /* offset of string section     */
        __u32   str_len;        /* length of string section     */
    };

The magic is ``0xeB9F``, which has different encodings on big- and
little-endian systems, and can be used to test whether BTF is generated
for a big- or little-endian target. The btf_header is designed to be
extensible, with hdr_len equal to ``sizeof(struct btf_header)`` when
the data blob is generated.
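
For illustration, a minimal user space sanity check of such a blob could
look like the sketch below. ``btf_blob_ok`` is a hypothetical helper,
not the kernel's actual validator; it assumes the uapi header
``linux/btf.h`` for ``struct btf_header`` and ``BTF_MAGIC``::

    #include <stddef.h>
    #include <linux/btf.h>

    /* Sketch: accept a blob only if it is big enough, carries the
     * expected magic (a byte-swapped magic means the blob was built
     * for the other endianness) and has a sane hdr_len.
     */
    static int btf_blob_ok(const void *data, size_t size)
    {
            const struct btf_header *hdr = data;

            if (size < sizeof(*hdr))
                    return 0;
            if (hdr->magic != BTF_MAGIC)
                    return 0;
            return hdr->hdr_len >= sizeof(*hdr) && hdr->hdr_len <= size;
    }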

2.1 String Encoding
===================

The first string in the string section must be a null string. The rest
of the string table is a concatenation of other null-terminated
strings.
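
For example, a string section holding the names ``int`` and ``a`` could
be laid out as below (the offsets are illustrative)::

    \0int\0a\0

Here ``int`` is referenced as string offset 1 and ``a`` as string
offset 5; offset 0 always names the empty string.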

2.2 Type Encoding
=================

The type id ``0`` is reserved for the ``void`` type. The type section
is parsed sequentially and a type id is assigned to each recognized
type, starting from id ``1``. Currently, the following types are
supported::

    #define BTF_KIND_INT            1       /* Integer      */
    #define BTF_KIND_PTR            2       /* Pointer      */
    #define BTF_KIND_ARRAY          3       /* Array        */
    #define BTF_KIND_STRUCT         4       /* Struct       */
    #define BTF_KIND_UNION          5       /* Union        */
    #define BTF_KIND_ENUM           6       /* Enumeration  */
    #define BTF_KIND_FWD            7       /* Forward      */
    #define BTF_KIND_TYPEDEF        8       /* Typedef      */
    #define BTF_KIND_VOLATILE       9       /* Volatile     */
    #define BTF_KIND_CONST          10      /* Const        */
    #define BTF_KIND_RESTRICT       11      /* Restrict     */
    #define BTF_KIND_FUNC           12      /* Function     */
    #define BTF_KIND_FUNC_PROTO     13      /* Function Proto       */

Note that the type section encodes debug info, not just pure types.
``BTF_KIND_FUNC`` is not a type; it represents a defined subprogram.

Each type contains the following common data::

    struct btf_type {
        __u32 name_off;
        /* "info" bits arrangement
         * bits  0-15: vlen (e.g. # of struct's members)
         * bits 16-23: unused
         * bits 24-27: kind (e.g. int, ptr, array...etc)
         * bits 28-30: unused
         * bit     31: kind_flag, currently used by
         *             struct, union and fwd
         */
        __u32 info;
        /* "size" is used by INT, ENUM, STRUCT and UNION.
         * "size" tells the size of the type it is describing.
         *
         * "type" is used by PTR, TYPEDEF, VOLATILE, CONST, RESTRICT,
         * FUNC and FUNC_PROTO.
         * "type" is a type_id referring to another type.
         */
        union {
                __u32 size;
                __u32 type;
        };
    };

For certain kinds, the common data are followed by kind-specific data.
The ``name_off`` in ``struct btf_type`` specifies the offset in the
string table. The following sections detail the encoding of each kind.
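
As a sketch of how a consumer decodes the ``info`` word, the uapi
header provides extractor macros for the bit layout documented above.
The loop below is illustrative only: bounds checking is omitted and
``next_type`` is a hypothetical helper that skips over the
kind-specific data::

    #include <linux/btf.h>

    /* BTF_INFO_KIND/VLEN/KFLAG are the uapi accessors for the "info"
     * bits documented above.
     */
    static void walk_types(const struct btf_type *t, const void *end)
    {
            __u32 id = 1;   /* id 0 is void */

            while ((const void *)t < end) {
                    __u16 vlen  = BTF_INFO_VLEN(t->info);
                    __u8  kind  = BTF_INFO_KIND(t->info);
                    __u8  kflag = BTF_INFO_KFLAG(t->info);

                    /* ... record (id, kind, vlen, kflag) ... */
                    t = next_type(t, kind, vlen);  /* hypothetical */
                    id++;
            }
    }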

2.2.1 BTF_KIND_INT
~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
 * ``name_off``: any valid offset
 * ``info.kind_flag``: 0
 * ``info.kind``: BTF_KIND_INT
 * ``info.vlen``: 0
 * ``size``: the size of the int type in bytes.

``btf_type`` is followed by a ``u32`` with the following bits arrangement::

    #define BTF_INT_ENCODING(VAL)   (((VAL) & 0x0f000000) >> 24)
    #define BTF_INT_OFFSET(VAL)     (((VAL) & 0x00ff0000) >> 16)
    #define BTF_INT_BITS(VAL)       ((VAL)  & 0x000000ff)

The ``BTF_INT_ENCODING()`` has the following attributes::

    #define BTF_INT_SIGNED  (1 << 0)
    #define BTF_INT_CHAR    (1 << 1)
    #define BTF_INT_BOOL    (1 << 2)

``BTF_INT_ENCODING()`` provides extra information: signedness, char, or
bool, for the int type. The char and bool encodings are mostly useful
for pretty printing. At most one encoding can be specified for the int
type.

``BTF_INT_BITS()`` specifies the number of actual bits held by this int
type. For example, a 4-bit bitfield has ``BTF_INT_BITS()`` equal to 4.
``btf_type.size * 8`` must be equal to or greater than
``BTF_INT_BITS()`` for the type. The maximum value of
``BTF_INT_BITS()`` is 128.

``BTF_INT_OFFSET()`` specifies the starting bit offset to calculate
values for this int. For example, a bitfield struct member has

 * btf member bit offset 100 from the start of the structure,
 * btf member pointing to an int type,
 * the int type has ``BTF_INT_OFFSET() = 2`` and ``BTF_INT_BITS() = 4``

Then in the struct memory layout, this member will occupy ``4`` bits
starting from bit ``100 + 2 = 102``.

Alternatively, the bitfield struct member can be encoded as follows to
access the same bits as above:

 * btf member bit offset 102,
 * btf member pointing to an int type,
 * the int type has ``BTF_INT_OFFSET() = 0`` and ``BTF_INT_BITS() = 4``

The original intention of ``BTF_INT_OFFSET()`` is to provide
flexibility of bitfield encoding. Currently, both llvm and pahole
generate ``BTF_INT_OFFSET() = 0`` for all int types.
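
Putting the pieces together, the ``u32`` following ``btf_type`` for a
plain 4-byte signed ``int`` (encoding ``SIGNED``, offset 0, 32 bits)
would be composed as below; this is the same ``0x1000020`` word that
shows up in the llvm assembly output in section 6::

    /* encoding in bits 24-27, offset in bits 16-23, nr_bits in 0-7 */
    __u32 int_data = (BTF_INT_SIGNED << 24) | (0 << 16) | 32;
                     /* == 0x01000020 */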

2.2.2 BTF_KIND_PTR
~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: 0
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_PTR
  * ``info.vlen``: 0
  * ``type``: the pointee type of the pointer

No additional type data follow ``btf_type``.

2.2.3 BTF_KIND_ARRAY
~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: 0
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_ARRAY
  * ``info.vlen``: 0
  * ``size/type``: 0, not used

``btf_type`` is followed by one ``struct btf_array``::

    struct btf_array {
        __u32   type;
        __u32   index_type;
        __u32   nelems;
    };

The ``struct btf_array`` encoding:
  * ``type``: the element type
  * ``index_type``: the index type
  * ``nelems``: the number of elements for this array (``0`` is also allowed).

The ``index_type`` can be any regular int type (u8, u16, u32, u64,
unsigned __int128). The original design of including ``index_type``
follows DWARF, which has an ``index_type`` for its array type.
Currently in BTF, beyond type verification, the ``index_type`` is not
used.

The ``struct btf_array`` allows chaining through the element type to
represent multidimensional arrays. For example, for ``int a[5][6]``,
the following type system illustrates the chaining:

  * [1]: int
  * [2]: array, ``btf_array.type = [1]``, ``btf_array.nelems = 6``
  * [3]: array, ``btf_array.type = [2]``, ``btf_array.nelems = 5``

Currently, both pahole and llvm collapse a multidimensional array into
a one-dimensional array, e.g., for ``a[5][6]``, ``btf_array.nelems``
equals ``30``. This is because the original use case is map pretty
printing, where the whole array is dumped out, so a one-dimensional
array is enough. As more BTF usage is explored, pahole and llvm can be
changed to generate a proper chained representation for
multidimensional arrays.

2.2.4 BTF_KIND_STRUCT
~~~~~~~~~~~~~~~~~~~~~
2.2.5 BTF_KIND_UNION
~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: 0 or offset to a valid C identifier
  * ``info.kind_flag``: 0 or 1
  * ``info.kind``: BTF_KIND_STRUCT or BTF_KIND_UNION
  * ``info.vlen``: the number of struct/union members
  * ``size``: the size of the struct/union in bytes

``btf_type`` is followed by ``info.vlen`` number of ``struct btf_member``::

    struct btf_member {
        __u32   name_off;
        __u32   type;
        __u32   offset;
    };

``struct btf_member`` encoding:
  * ``name_off``: offset to a valid C identifier
  * ``type``: the member type
  * ``offset``: <see below>

If the type info ``kind_flag`` is not set, the offset contains only the
bit offset of the member. Note that the base type of a bitfield can
only be an int or enum type. If the bitfield size is 32, the base type
can be either an int or enum type. If the bitfield size is not 32, the
base type must be int, and the int type's ``BTF_INT_BITS()`` encodes
the bitfield size.

If the ``kind_flag`` is set, the ``btf_member.offset`` contains both
the member bitfield size and the bit offset. The bitfield size and bit
offset are calculated as below::

    #define BTF_MEMBER_BITFIELD_SIZE(val)   ((val) >> 24)
    #define BTF_MEMBER_BIT_OFFSET(val)      ((val) & 0xffffff)

In this case, if the base type is an int type, it must be a regular int
type:

  * ``BTF_INT_OFFSET()`` must be 0.
  * ``BTF_INT_BITS()`` must be equal to ``{1,2,4,8,16} * 8``.

The following kernel patch introduced ``kind_flag`` and explained why
both modes exist:

  https://github.com/torvalds/linux/commit/9d5f9f701b1891466fb3dbb1806ad97716f95cc3#diff-fa650a64fdd3968396883d2fe8215ff3
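
A small sketch of how the two modes are decoded (``member`` is the
``struct btf_member`` in question; ``int_data`` is the ``u32``
following the member's int type and only applies to the bitfield case
with ``kind_flag == 0``)::

    /* Returns the bitfield size and absolute bit offset for a member,
     * covering both encodings described above.
     */
    static void member_bits(int kind_flag, const struct btf_member *member,
                            __u32 int_data, __u32 *bits, __u32 *bit_off)
    {
            if (kind_flag) {
                    *bits    = BTF_MEMBER_BITFIELD_SIZE(member->offset);
                    *bit_off = BTF_MEMBER_BIT_OFFSET(member->offset);
            } else {
                    *bits    = BTF_INT_BITS(int_data);
                    *bit_off = member->offset + BTF_INT_OFFSET(int_data);
            }
    }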

2.2.6 BTF_KIND_ENUM
~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: 0 or offset to a valid C identifier
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_ENUM
  * ``info.vlen``: number of enum values
  * ``size``: 4

``btf_type`` is followed by ``info.vlen`` number of ``struct btf_enum``::

    struct btf_enum {
        __u32   name_off;
        __s32   val;
    };

The ``btf_enum`` encoding:
  * ``name_off``: offset to a valid C identifier
  * ``val``: any value

2.2.7 BTF_KIND_FWD
~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: offset to a valid C identifier
  * ``info.kind_flag``: 0 for struct, 1 for union
  * ``info.kind``: BTF_KIND_FWD
  * ``info.vlen``: 0
  * ``type``: 0

No additional type data follow ``btf_type``.

2.2.8 BTF_KIND_TYPEDEF
~~~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: offset to a valid C identifier
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_TYPEDEF
  * ``info.vlen``: 0
  * ``type``: the type which can be referred to by the name at ``name_off``

No additional type data follow ``btf_type``.

2.2.9 BTF_KIND_VOLATILE
~~~~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: 0
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_VOLATILE
  * ``info.vlen``: 0
  * ``type``: the type with the ``volatile`` qualifier

No additional type data follow ``btf_type``.

2.2.10 BTF_KIND_CONST
~~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: 0
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_CONST
  * ``info.vlen``: 0
  * ``type``: the type with the ``const`` qualifier

No additional type data follow ``btf_type``.

2.2.11 BTF_KIND_RESTRICT
~~~~~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: 0
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_RESTRICT
  * ``info.vlen``: 0
  * ``type``: the type with the ``restrict`` qualifier

No additional type data follow ``btf_type``.

2.2.12 BTF_KIND_FUNC
~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: offset to a valid C identifier
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_FUNC
  * ``info.vlen``: 0
  * ``type``: a BTF_KIND_FUNC_PROTO type

No additional type data follow ``btf_type``.

A BTF_KIND_FUNC defines not a type, but a subprogram (function) whose
signature is defined by ``type``. The subprogram is thus an instance of
that type. The BTF_KIND_FUNC may in turn be referenced by a func_info in
the :ref:`BTF_Ext_Section` (ELF) or in the arguments to
:ref:`BPF_Prog_Load` (ABI).

2.2.13 BTF_KIND_FUNC_PROTO
~~~~~~~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
  * ``name_off``: 0
  * ``info.kind_flag``: 0
  * ``info.kind``: BTF_KIND_FUNC_PROTO
  * ``info.vlen``: # of parameters
  * ``type``: the return type

``btf_type`` is followed by ``info.vlen`` number of ``struct btf_param``::

    struct btf_param {
        __u32   name_off;
        __u32   type;
    };

If a BTF_KIND_FUNC_PROTO type is referred to by a BTF_KIND_FUNC type,
then ``btf_param.name_off`` must point to a valid C identifier, except
for the possible last argument representing the variable arguments.
``btf_param.type`` refers to the parameter type.

If the function has variable arguments, the last parameter is encoded
with ``name_off = 0`` and ``type = 0``.
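
For example, a prototype like ``int foo(int a, ...)`` could be encoded
with the following (illustrative) type ids:

  * [1]: int
  * [2]: FUNC_PROTO, ``type = [1]``, ``vlen = 2``, with params
    ``(name_off = <offset of "a">, type = [1])`` and
    ``(name_off = 0, type = 0)`` for the variable arguments
  * [3]: FUNC ``foo``, ``type = [2]``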

3. BTF Kernel API
*****************

The following bpf syscall commands involve BTF:
   * BPF_BTF_LOAD: load a blob of BTF data into the kernel
   * BPF_MAP_CREATE: map creation with btf key and value type info.
   * BPF_PROG_LOAD: prog load with btf function and line info.
   * BPF_BTF_GET_FD_BY_ID: get a btf fd
   * BPF_OBJ_GET_INFO_BY_FD: btf, func_info, line_info
     and other btf related info are returned.

The workflow typically looks like:
::

  Application:
      BPF_BTF_LOAD
          |
          v
      BPF_MAP_CREATE and BPF_PROG_LOAD
          |
          V
      .....

  Introspection tool:
      ......
      BPF_{PROG,MAP}_GET_NEXT_ID  (get prog/map id's)
          |
          V
      BPF_{PROG,MAP}_GET_FD_BY_ID  (get a prog/map fd)
          |
          V
      BPF_OBJ_GET_INFO_BY_FD  (get bpf_prog_info/bpf_map_info with btf_id)
          |                                     |
          V                                     |
      BPF_BTF_GET_FD_BY_ID  (get btf_fd)        |
          |                                     |
          V                                     |
      BPF_OBJ_GET_INFO_BY_FD  (get btf)         |
          |                                     |
          V                                     V
      pretty print types, dump func signatures and line info, etc.

3.1 BPF_BTF_LOAD
================

Load a blob of BTF data into the kernel. A blob of data, described in
:ref:`BTF_Type_String`, can be directly loaded into the kernel. A
``btf_fd`` is returned to user space.
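
A minimal sketch of the call, assuming the blob is already in memory
(error handling omitted; the ``btf``/``btf_size`` attribute names come
from ``union bpf_attr`` in the uapi header)::

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    /* Load a BTF blob; a btf_fd (or -1 on error) is returned. */
    static int btf_load(const void *btf_data, __u32 btf_size)
    {
            union bpf_attr attr = {};

            attr.btf      = (__u64)(unsigned long)btf_data;
            attr.btf_size = btf_size;

            return syscall(__NR_bpf, BPF_BTF_LOAD, &attr, sizeof(attr));
    }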

3.2 BPF_MAP_CREATE
==================

A map can be created with ``btf_fd`` and specified key/value type ids::

    __u32   btf_fd;                 /* fd pointing to a BTF type data */
    __u32   btf_key_type_id;        /* BTF type_id of the key */
    __u32   btf_value_type_id;      /* BTF type_id of the value */

In libbpf, the map can be defined with extra annotation like below:
::

    struct bpf_map_def SEC("maps") btf_map = {
        .type = BPF_MAP_TYPE_ARRAY,
        .key_size = sizeof(int),
        .value_size = sizeof(struct ipv_counts),
        .max_entries = 4,
    };
    BPF_ANNOTATE_KV_PAIR(btf_map, int, struct ipv_counts);

Here, the parameters for the macro BPF_ANNOTATE_KV_PAIR are the map
name and the key and value types for the map. During ELF parsing,
libbpf is able to extract the key/value type_id's and assign them to
the BPF_MAP_CREATE attributes automatically.

.. _BPF_Prog_Load:

3.3 BPF_PROG_LOAD
=================

During prog_load, func_info and line_info can be passed to the kernel
with proper values for the following attributes:
::

    __u32           insn_cnt;
    __aligned_u64   insns;
    ......
    __u32           prog_btf_fd;        /* fd pointing to BTF type data */
    __u32           func_info_rec_size; /* userspace bpf_func_info size */
    __aligned_u64   func_info;          /* func info */
    __u32           func_info_cnt;      /* number of bpf_func_info records */
    __u32           line_info_rec_size; /* userspace bpf_line_info size */
    __aligned_u64   line_info;          /* line info */
    __u32           line_info_cnt;      /* number of bpf_line_info records */

The func_info and line_info are arrays of the below records,
respectively::

    struct bpf_func_info {
        __u32   insn_off;      /* [0, insn_cnt - 1] */
        __u32   type_id;       /* pointing to a BTF_KIND_FUNC type */
    };
    struct bpf_line_info {
        __u32   insn_off;      /* [0, insn_cnt - 1] */
        __u32   file_name_off; /* offset to string table for the filename */
        __u32   line_off;      /* offset to string table for the source line */
        __u32   line_col;      /* line number and column number */
    };

func_info_rec_size is the size of each func_info record, and
line_info_rec_size is the size of each line_info record. Passing the
record size to the kernel makes it possible to extend the record itself
in the future.

Below are the requirements for func_info:
  * func_info[0].insn_off must be 0.
  * the func_info insn_off is in strictly increasing order and matches
    bpf func boundaries.

Below are the requirements for line_info:
  * the first insn in each func must point to a line_info record.
  * the line_info insn_off is in strictly increasing order.

For line_info, the line number and column number are defined as below:
::

    #define BPF_LINE_INFO_LINE_NUM(line_col)        ((line_col) >> 10)
    #define BPF_LINE_INFO_LINE_COL(line_col)        ((line_col) & 0x3ff)
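
So, for example, a record for line 7, column 14 packs into ``line_col``
as below (the same value that appears as ``7182`` in the .BTF.ext
assembly dump in section 6)::

    __u32 line_col = (7 << 10) | 14;    /* == 7182 */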

3.4 BPF_{PROG,MAP}_GET_NEXT_ID
==============================

In the kernel, every loaded program, map or btf has a unique id. The id
won't change during the lifetime of the program, map or btf.

The bpf syscall command BPF_{PROG,MAP}_GET_NEXT_ID returns all ids, one
at a time, to user space for bpf programs or maps, so an inspection
tool can inspect all programs and maps.
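
A sketch of the usual iteration pattern (error handling trimmed;
``start_id``/``next_id`` are the ``union bpf_attr`` fields used by this
command)::

    union bpf_attr attr = {};   /* start_id = 0: begin iteration */

    while (syscall(__NR_bpf, BPF_PROG_GET_NEXT_ID, &attr,
                   sizeof(attr)) == 0) {
            __u32 id = attr.next_id;

            /* ... inspect the program with this id ... */
            attr.start_id = id;  /* continue from the id just seen */
    }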

3.5 BPF_{PROG,MAP}_GET_FD_BY_ID
===============================

An introspection tool cannot use an id to get details about a program
or map. A file descriptor needs to be obtained first, for reference
counting purposes.

3.6 BPF_OBJ_GET_INFO_BY_FD
==========================

Once a program/map fd is acquired, an introspection tool can get the
detailed information from the kernel about this fd, some of which is
btf related. For example, ``bpf_map_info`` returns ``btf_id`` and the
key/value type ids. ``bpf_prog_info`` returns ``btf_id``, func_info and
line info for translated bpf byte codes, and jited_line_info.

3.7 BPF_BTF_GET_FD_BY_ID
========================

With the ``btf_id`` obtained in ``bpf_map_info`` and ``bpf_prog_info``,
the bpf syscall command BPF_BTF_GET_FD_BY_ID can retrieve a btf fd.
Then, with the command BPF_OBJ_GET_INFO_BY_FD, the btf blob, originally
loaded into the kernel with BPF_BTF_LOAD, can be retrieved.

With the btf blob, ``bpf_map_info`` and ``bpf_prog_info``, an
introspection tool has full btf knowledge and is able to pretty print
map key/values, dump func signatures, and dump line info along with the
byte/jit codes.

4. ELF File Format Interface
****************************

4.1 .BTF section
================

The .BTF section contains type and string data. The format of this
section is the same as the one described in :ref:`BTF_Type_String`.

.. _BTF_Ext_Section:

4.2 .BTF.ext section
====================

The .BTF.ext section encodes func_info and line_info, which need loader
manipulation before being loaded into the kernel.

The specification for the .BTF.ext section is defined at
``tools/lib/bpf/btf.h`` and ``tools/lib/bpf/btf.c``.

The current header of the .BTF.ext section::

    struct btf_ext_header {
        __u16   magic;
        __u8    version;
        __u8    flags;
        __u32   hdr_len;

        /* All offsets are in bytes relative to the end of this header */
        __u32   func_info_off;
        __u32   func_info_len;
        __u32   line_info_off;
        __u32   line_info_len;
    };

It is very similar to the .BTF section. Instead of type/string
sections, it contains func_info and line_info sections. See
:ref:`BPF_Prog_Load` for details about the func_info and line_info
record formats.

The func_info is organized as below::

     func_info_rec_size
     btf_ext_info_sec for section #1 /* func_info for section #1 */
     btf_ext_info_sec for section #2 /* func_info for section #2 */
     ...

``func_info_rec_size`` specifies the size of the ``bpf_func_info``
structure when .BTF.ext is generated. ``btf_ext_info_sec``, defined
below, is the func_info for each specific ELF section::

     struct btf_ext_info_sec {
        __u32   sec_name_off; /* offset to section name */
        __u32   num_info;
        /* Followed by num_info * record_size number of bytes */
        __u8    data[0];
     };

Here, num_info must be greater than 0.

The line_info is organized as below::

     line_info_rec_size
     btf_ext_info_sec for section #1 /* line_info for section #1 */
     btf_ext_info_sec for section #2 /* line_info for section #2 */
     ...

``line_info_rec_size`` specifies the size of the ``bpf_line_info``
structure when .BTF.ext is generated.

The interpretation of ``bpf_func_info->insn_off`` and
``bpf_line_info->insn_off`` is different between the kernel API and the
ELF API. For the kernel API, ``insn_off`` is the instruction offset in
units of ``struct bpf_insn``. For the ELF API, ``insn_off`` is the byte
offset from the beginning of the section
(``btf_ext_info_sec->sec_name_off``).
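
The loader therefore has to rewrite each record before BPF_PROG_LOAD,
roughly as in the sketch below (``rec`` stands for a
``struct bpf_func_info`` or ``struct bpf_line_info`` taken from
.BTF.ext)::

    /* ELF API: insn_off is in bytes from the section start.
     * Kernel API: insn_off is in struct bpf_insn (8 byte) units.
     */
    rec->insn_off /= sizeof(struct bpf_insn);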

5. Using BTF
************

5.1 bpftool map pretty print
============================

With BTF, the map key/value can be printed based on fields rather than
simply raw bytes. This is especially valuable for large structures or
if your data structure has bitfields. For example, for the following
map::

      enum A { A1, A2, A3, A4, A5 };
      typedef enum A ___A;
      struct tmp_t {
           char a1:4;
           int  a2:4;
           int  :4;
           __u32 a3:4;
           int b;
           ___A b1:4;
           enum A b2:4;
      };
      struct bpf_map_def SEC("maps") tmpmap = {
           .type = BPF_MAP_TYPE_ARRAY,
           .key_size = sizeof(__u32),
           .value_size = sizeof(struct tmp_t),
           .max_entries = 1,
      };
      BPF_ANNOTATE_KV_PAIR(tmpmap, int, struct tmp_t);

bpftool is able to pretty print it like below:
::

      [{
            "key": 0,
            "value": {
                "a1": 0x2,
                "a2": 0x4,
                "a3": 0x6,
                "b": 7,
                "b1": 0x8,
                "b2": 0xa
            }
        }
      ]

5.2 bpftool prog dump
=====================

The following example shows how func_info and line_info can help prog
dump with better kernel symbol names, function prototypes and line
information::

    $ bpftool prog dump jited pinned /sys/fs/bpf/test_btf_haskv
    [...]
    int test_long_fname_2(struct dummy_tracepoint_args * arg):
    bpf_prog_44a040bf25481309_test_long_fname_2:
    ; static int test_long_fname_2(struct dummy_tracepoint_args *arg)
       0:   push   %rbp
       1:   mov    %rsp,%rbp
       4:   sub    $0x30,%rsp
       b:   sub    $0x28,%rbp
       f:   mov    %rbx,0x0(%rbp)
      13:   mov    %r13,0x8(%rbp)
      17:   mov    %r14,0x10(%rbp)
      1b:   mov    %r15,0x18(%rbp)
      1f:   xor    %eax,%eax
      21:   mov    %rax,0x20(%rbp)
      25:   xor    %esi,%esi
    ; int key = 0;
      27:   mov    %esi,-0x4(%rbp)
    ; if (!arg->sock)
      2a:   mov    0x8(%rdi),%rdi
    ; if (!arg->sock)
      2e:   cmp    $0x0,%rdi
      32:   je     0x0000000000000070
      34:   mov    %rbp,%rsi
    ; counts = bpf_map_lookup_elem(&btf_map, &key);
    [...]

5.3 verifier log
================

The following example shows how line_info can help debug verification
failures::

       /* The code at tools/testing/selftests/bpf/test_xdp_noinline.c
        * is modified as below.
        */
       data = (void *)(long)xdp->data;
       data_end = (void *)(long)xdp->data_end;
       /*
       if (data + 4 > data_end)
               return XDP_DROP;
       */
       *(u32 *)data = dst->dst;

       $ bpftool prog load ./test_xdp_noinline.o /sys/fs/bpf/test_xdp_noinline type xdp
        ; data = (void *)(long)xdp->data;
        224: (79) r2 = *(u64 *)(r10 -112)
        225: (61) r2 = *(u32 *)(r2 +0)
        ; *(u32 *)data = dst->dst;
        226: (63) *(u32 *)(r2 +0) = r1
        invalid access to packet, off=0 size=4, R2(id=0,off=0,r=0)
        R2 offset is outside of the packet

6. BTF Generation
*****************

You need the latest pahole

  https://git.kernel.org/pub/scm/devel/pahole/pahole.git/

or llvm (8.0 or later). pahole acts as a dwarf2btf converter. It
doesn't support .BTF.ext and the btf BTF_KIND_FUNC type yet. For
example::

      -bash-4.4$ cat t.c
      struct t {
        int a:2;
        int b:3;
        int c:2;
      } g;
      -bash-4.4$ gcc -c -O2 -g t.c
      -bash-4.4$ pahole -JV t.o
      File t.o:
      [1] STRUCT t kind_flag=1 size=4 vlen=3
              a type_id=2 bitfield_size=2 bits_offset=0
              b type_id=2 bitfield_size=3 bits_offset=2
              c type_id=2 bitfield_size=2 bits_offset=5
      [2] INT int size=4 bit_offset=0 nr_bits=32 encoding=SIGNED

llvm is able to generate .BTF and .BTF.ext directly with ``-g`` for the
bpf target only. The assembly code (``-S``) is able to show the BTF
encoding in assembly format::

      -bash-4.4$ cat t2.c
      typedef int __int32;
      struct t2 {
        int a2;
        int (*f2)(char q1, __int32 q2, ...);
        int (*f3)();
      } g2;
      int main() { return 0; }
      int test() { return 0; }
      -bash-4.4$ clang -c -g -O2 -target bpf t2.c
      -bash-4.4$ readelf -S t2.o
        ......
        [ 8] .BTF              PROGBITS         0000000000000000  00000247
             000000000000016e  0000000000000000           0     0     1
        [ 9] .BTF.ext          PROGBITS         0000000000000000  000003b5
             0000000000000060  0000000000000000           0     0     1
        [10] .rel.BTF.ext      REL              0000000000000000  000007e0
             0000000000000040  0000000000000010          16     9     8
        ......
      -bash-4.4$ clang -S -g -O2 -target bpf t2.c
      -bash-4.4$ cat t2.s
        ......
              .section        .BTF,"",@progbits
              .short  60319                   # 0xeb9f
              .byte   1
              .byte   0
              .long   24
              .long   0
              .long   220
              .long   220
              .long   122
              .long   0                       # BTF_KIND_FUNC_PROTO(id = 1)
              .long   218103808               # 0xd000000
              .long   2
              .long   83                      # BTF_KIND_INT(id = 2)
              .long   16777216                # 0x1000000
              .long   4
              .long   16777248                # 0x1000020
        ......
              .byte   0                       # string offset=0
              .ascii  ".text"                 # string offset=1
              .byte   0
              .ascii  "/home/yhs/tmp-pahole/t2.c"      # string offset=7
              .byte   0
              .ascii  "int main() { return 0; }"       # string offset=33
              .byte   0
              .ascii  "int test() { return 0; }"       # string offset=58
              .byte   0
              .ascii  "int"                   # string offset=83
        ......
              .section        .BTF.ext,"",@progbits
              .short  60319                   # 0xeb9f
              .byte   1
              .byte   0
              .long   24
              .long   0
              .long   28
              .long   28
              .long   44
              .long   8                       # FuncInfo
              .long   1                       # FuncInfo section string offset=1
              .long   2
              .long   .Lfunc_begin0
              .long   3
              .long   .Lfunc_begin1
              .long   5
              .long   16                      # LineInfo
              .long   1                       # LineInfo section string offset=1
              .long   2
              .long   .Ltmp0
              .long   7
              .long   33
              .long   7182                    # Line 7 Col 14
              .long   .Ltmp3
              .long   7
              .long   58
              .long   8206                    # Line 8 Col 14

7. Testing
**********

The kernel bpf selftest `test_btf.c` provides an extensive set of BTF
related tests.

@@ -15,6 +15,13 @@ that goes into great technical depth about the BPF Architecture.
 The primary info for the bpf syscall is available in the `man-pages`_
 for `bpf(2)`_.
 
+BPF Type Format (BTF)
+=====================
+
+.. toctree::
+   :maxdepth: 1
+
+   btf
+
 
 Frequently asked questions (FAQ)
@@ -865,7 +865,7 @@ Three LSB bits store instruction class which is one of:
   BPF_STX   0x03          BPF_STX   0x03
   BPF_ALU   0x04          BPF_ALU   0x04
   BPF_JMP   0x05          BPF_JMP   0x05
-  BPF_RET   0x06          [ class 6 unused, for future if needed ]
+  BPF_RET   0x06          BPF_JMP32 0x06
   BPF_MISC  0x07          BPF_ALU64 0x07
 
 When BPF_CLASS(code) == BPF_ALU or BPF_JMP, 4th bit encodes source operand ...

@@ -902,9 +902,9 @@ If BPF_CLASS(code) == BPF_ALU or BPF_ALU64 [ in eBPF ], BPF_OP(code) is one of:
   BPF_ARSH  0xc0  /* eBPF only: sign extending shift right */
   BPF_END   0xd0  /* eBPF only: endianness conversion */
 
-If BPF_CLASS(code) == BPF_JMP, BPF_OP(code) is one of:
+If BPF_CLASS(code) == BPF_JMP or BPF_JMP32 [ in eBPF ], BPF_OP(code) is one of:
 
-  BPF_JA    0x00
+  BPF_JA    0x00  /* BPF_JMP only */
   BPF_JEQ   0x10
   BPF_JGT   0x20
   BPF_JGE   0x30

@@ -912,8 +912,8 @@ If BPF_CLASS(code) == BPF_JMP, BPF_OP(code) is one of:
   BPF_JNE   0x50  /* eBPF only: jump != */
   BPF_JSGT  0x60  /* eBPF only: signed '>' */
   BPF_JSGE  0x70  /* eBPF only: signed '>=' */
-  BPF_CALL  0x80  /* eBPF only: function call */
-  BPF_EXIT  0x90  /* eBPF only: function return */
+  BPF_CALL  0x80  /* eBPF BPF_JMP only: function call */
+  BPF_EXIT  0x90  /* eBPF BPF_JMP only: function return */
   BPF_JLT   0xa0  /* eBPF only: unsigned '<' */
   BPF_JLE   0xb0  /* eBPF only: unsigned '<=' */
   BPF_JSLT  0xc0  /* eBPF only: signed '<' */

@@ -936,8 +936,9 @@ Classic BPF wastes the whole BPF_RET class to represent a single 'ret'
 operation. Classic BPF_RET | BPF_K means copy imm32 into return register
 and perform function exit. eBPF is modeled to match CPU, so BPF_JMP | BPF_EXIT
 in eBPF means function exit only. The eBPF program needs to store return
-value into register R0 before doing a BPF_EXIT. Class 6 in eBPF is currently
-unused and reserved for future use.
+value into register R0 before doing a BPF_EXIT. Class 6 in eBPF is used as
+BPF_JMP32 to mean exactly the same operations as BPF_JMP, but with 32-bit wide
+operands for the comparisons instead.
 
 For load and store instructions the 8-bit 'code' field is divided as:
@@ -1083,12 +1083,17 @@ static inline void emit_ldx_r(const s8 dst[], const s8 src,
 
 /* Arithmatic Operation */
 static inline void emit_ar_r(const u8 rd, const u8 rt, const u8 rm,
-			     const u8 rn, struct jit_ctx *ctx, u8 op) {
+			     const u8 rn, struct jit_ctx *ctx, u8 op,
+			     bool is_jmp64) {
 	switch (op) {
 	case BPF_JSET:
-		emit(ARM_AND_R(ARM_IP, rt, rn), ctx);
-		emit(ARM_AND_R(ARM_LR, rd, rm), ctx);
-		emit(ARM_ORRS_R(ARM_IP, ARM_LR, ARM_IP), ctx);
+		if (is_jmp64) {
+			emit(ARM_AND_R(ARM_IP, rt, rn), ctx);
+			emit(ARM_AND_R(ARM_LR, rd, rm), ctx);
+			emit(ARM_ORRS_R(ARM_IP, ARM_LR, ARM_IP), ctx);
+		} else {
+			emit(ARM_ANDS_R(ARM_IP, rt, rn), ctx);
+		}
 		break;
 	case BPF_JEQ:
 	case BPF_JNE:

@@ -1096,18 +1101,25 @@ static inline void emit_ar_r(const u8 rd, const u8 rt, const u8 rm,
 	case BPF_JGE:
 	case BPF_JLE:
 	case BPF_JLT:
-		emit(ARM_CMP_R(rd, rm), ctx);
-		_emit(ARM_COND_EQ, ARM_CMP_R(rt, rn), ctx);
+		if (is_jmp64) {
+			emit(ARM_CMP_R(rd, rm), ctx);
+			/* Only compare low halve if high halve are equal. */
+			_emit(ARM_COND_EQ, ARM_CMP_R(rt, rn), ctx);
+		} else {
+			emit(ARM_CMP_R(rt, rn), ctx);
+		}
 		break;
 	case BPF_JSLE:
 	case BPF_JSGT:
 		emit(ARM_CMP_R(rn, rt), ctx);
-		emit(ARM_SBCS_R(ARM_IP, rm, rd), ctx);
+		if (is_jmp64)
+			emit(ARM_SBCS_R(ARM_IP, rm, rd), ctx);
 		break;
 	case BPF_JSLT:
 	case BPF_JSGE:
 		emit(ARM_CMP_R(rt, rn), ctx);
-		emit(ARM_SBCS_R(ARM_IP, rd, rm), ctx);
+		if (is_jmp64)
+			emit(ARM_SBCS_R(ARM_IP, rd, rm), ctx);
 		break;
 	}
 }

@@ -1615,6 +1627,17 @@ exit:
 	case BPF_JMP | BPF_JLT | BPF_X:
 	case BPF_JMP | BPF_JSLT | BPF_X:
 	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP32 | BPF_JEQ | BPF_X:
+	case BPF_JMP32 | BPF_JGT | BPF_X:
+	case BPF_JMP32 | BPF_JGE | BPF_X:
+	case BPF_JMP32 | BPF_JNE | BPF_X:
+	case BPF_JMP32 | BPF_JSGT | BPF_X:
+	case BPF_JMP32 | BPF_JSGE | BPF_X:
+	case BPF_JMP32 | BPF_JSET | BPF_X:
+	case BPF_JMP32 | BPF_JLE | BPF_X:
+	case BPF_JMP32 | BPF_JLT | BPF_X:
+	case BPF_JMP32 | BPF_JSLT | BPF_X:
+	case BPF_JMP32 | BPF_JSLE | BPF_X:
 		/* Setup source registers */
 		rm = arm_bpf_get_reg32(src_hi, tmp2[0], ctx);
 		rn = arm_bpf_get_reg32(src_lo, tmp2[1], ctx);

@@ -1641,6 +1664,17 @@ exit:
 	case BPF_JMP | BPF_JLE | BPF_K:
 	case BPF_JMP | BPF_JSLT | BPF_K:
 	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP32 | BPF_JEQ | BPF_K:
+	case BPF_JMP32 | BPF_JGT | BPF_K:
+	case BPF_JMP32 | BPF_JGE | BPF_K:
+	case BPF_JMP32 | BPF_JNE | BPF_K:
+	case BPF_JMP32 | BPF_JSGT | BPF_K:
+	case BPF_JMP32 | BPF_JSGE | BPF_K:
+	case BPF_JMP32 | BPF_JSET | BPF_K:
+	case BPF_JMP32 | BPF_JLT | BPF_K:
+	case BPF_JMP32 | BPF_JLE | BPF_K:
+	case BPF_JMP32 | BPF_JSLT | BPF_K:
+	case BPF_JMP32 | BPF_JSLE | BPF_K:
 		if (off == 0)
 			break;
 		rm = tmp2[0];

@@ -1652,7 +1686,8 @@ go_jmp:
 	rd = arm_bpf_get_reg64(dst, tmp, ctx);
 
 	/* Check for the condition */
-	emit_ar_r(rd[0], rd[1], rm, rn, ctx, BPF_OP(code));
+	emit_ar_r(rd[0], rd[1], rm, rn, ctx, BPF_OP(code),
+		  BPF_CLASS(code) == BPF_JMP);
 
 	/* Setup JUMP instruction */
 	jmp_offset = bpf2a32_offset(i+off, i, ctx);
@@ -62,6 +62,7 @@
 #define ARM_INST_ADDS_I		0x02900000
 
 #define ARM_INST_AND_R		0x00000000
+#define ARM_INST_ANDS_R		0x00100000
 #define ARM_INST_AND_I		0x02000000
 
 #define ARM_INST_BIC_R		0x01c00000

@@ -172,6 +173,7 @@
 #define ARM_ADC_I(rd, rn, imm)	_AL3_I(ARM_INST_ADC, rd, rn, imm)
 
 #define ARM_AND_R(rd, rn, rm)	_AL3_R(ARM_INST_AND, rd, rn, rm)
+#define ARM_ANDS_R(rd, rn, rm)	_AL3_R(ARM_INST_ANDS, rd, rn, rm)
 #define ARM_AND_I(rd, rn, imm)	_AL3_I(ARM_INST_AND, rd, rn, imm)
 
 #define ARM_BIC_R(rd, rn, rm)	_AL3_R(ARM_INST_BIC, rd, rn, rm)
@@ -362,7 +362,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	const s16 off = insn->off;
 	const s32 imm = insn->imm;
 	const int i = insn - ctx->prog->insnsi;
-	const bool is64 = BPF_CLASS(code) == BPF_ALU64;
+	const bool is64 = BPF_CLASS(code) == BPF_ALU64 ||
+			  BPF_CLASS(code) == BPF_JMP;
 	const bool isdw = BPF_SIZE(code) == BPF_DW;
 	u8 jmp_cond;
 	s32 jmp_offset;

@@ -559,7 +560,17 @@ emit_bswap_uxt:
 	case BPF_JMP | BPF_JSLT | BPF_X:
 	case BPF_JMP | BPF_JSGE | BPF_X:
 	case BPF_JMP | BPF_JSLE | BPF_X:
-		emit(A64_CMP(1, dst, src), ctx);
+	case BPF_JMP32 | BPF_JEQ | BPF_X:
+	case BPF_JMP32 | BPF_JGT | BPF_X:
+	case BPF_JMP32 | BPF_JLT | BPF_X:
+	case BPF_JMP32 | BPF_JGE | BPF_X:
+	case BPF_JMP32 | BPF_JLE | BPF_X:
+	case BPF_JMP32 | BPF_JNE | BPF_X:
+	case BPF_JMP32 | BPF_JSGT | BPF_X:
+	case BPF_JMP32 | BPF_JSLT | BPF_X:
+	case BPF_JMP32 | BPF_JSGE | BPF_X:
+	case BPF_JMP32 | BPF_JSLE | BPF_X:
+		emit(A64_CMP(is64, dst, src), ctx);
 emit_cond_jmp:
 		jmp_offset = bpf2a64_offset(i + off, i, ctx);
 		check_imm19(jmp_offset);

@@ -601,7 +612,8 @@ emit_cond_jmp:
 		emit(A64_B_(jmp_cond, jmp_offset), ctx);
 		break;
 	case BPF_JMP | BPF_JSET | BPF_X:
-		emit(A64_TST(1, dst, src), ctx);
+	case BPF_JMP32 | BPF_JSET | BPF_X:
+		emit(A64_TST(is64, dst, src), ctx);
 		goto emit_cond_jmp;
 	/* IF (dst COND imm) JUMP off */
 	case BPF_JMP | BPF_JEQ | BPF_K:

@@ -614,12 +626,23 @@ emit_cond_jmp:
 	case BPF_JMP | BPF_JSLT | BPF_K:
 	case BPF_JMP | BPF_JSGE | BPF_K:
 	case BPF_JMP | BPF_JSLE | BPF_K:
-		emit_a64_mov_i(1, tmp, imm, ctx);
-		emit(A64_CMP(1, dst, tmp), ctx);
+	case BPF_JMP32 | BPF_JEQ | BPF_K:
+	case BPF_JMP32 | BPF_JGT | BPF_K:
+	case BPF_JMP32 | BPF_JLT | BPF_K:
+	case BPF_JMP32 | BPF_JGE | BPF_K:
+	case BPF_JMP32 | BPF_JLE | BPF_K:
+	case BPF_JMP32 | BPF_JNE | BPF_K:
+	case BPF_JMP32 | BPF_JSGT | BPF_K:
+	case BPF_JMP32 | BPF_JSLT | BPF_K:
+	case BPF_JMP32 | BPF_JSGE | BPF_K:
+	case BPF_JMP32 | BPF_JSLE | BPF_K:
+		emit_a64_mov_i(is64, tmp, imm, ctx);
+		emit(A64_CMP(is64, dst, tmp), ctx);
 		goto emit_cond_jmp;
 	case BPF_JMP | BPF_JSET | BPF_K:
-		emit_a64_mov_i(1, tmp, imm, ctx);
-		emit(A64_TST(1, dst, tmp), ctx);
+	case BPF_JMP32 | BPF_JSET | BPF_K:
+		emit_a64_mov_i(is64, tmp, imm, ctx);
+		emit(A64_TST(is64, dst, tmp), ctx);
 		goto emit_cond_jmp;
 	/* function call */
 	case BPF_JMP | BPF_CALL:
@@ -337,6 +337,7 @@
 #define PPC_INST_DIVWU			0x7c000396
 #define PPC_INST_DIVD			0x7c0003d2
 #define PPC_INST_RLWINM			0x54000000
+#define PPC_INST_RLWINM_DOT		0x54000001
 #define PPC_INST_RLWIMI			0x50000000
 #define PPC_INST_RLDICL			0x78000000
 #define PPC_INST_RLDICR			0x78000004
@@ -165,6 +165,10 @@
 #define PPC_RLWINM(d, a, i, mb, me)	EMIT(PPC_INST_RLWINM | ___PPC_RA(d) | \
 					___PPC_RS(a) | __PPC_SH(i) |	      \
 					__PPC_MB(mb) | __PPC_ME(me))
+#define PPC_RLWINM_DOT(d, a, i, mb, me)	EMIT(PPC_INST_RLWINM_DOT |	      \
+					___PPC_RA(d) | ___PPC_RS(a) |	      \
+					__PPC_SH(i) | __PPC_MB(mb) |	      \
+					__PPC_ME(me))
 #define PPC_RLWIMI(d, a, i, mb, me)	EMIT(PPC_INST_RLWIMI | ___PPC_RA(d) | \
 					___PPC_RS(a) | __PPC_SH(i) |	      \
 					__PPC_MB(mb) | __PPC_ME(me))
@@ -768,36 +768,58 @@ emit_clear:
 		case BPF_JMP | BPF_JGT | BPF_X:
 		case BPF_JMP | BPF_JSGT | BPF_K:
 		case BPF_JMP | BPF_JSGT | BPF_X:
+		case BPF_JMP32 | BPF_JGT | BPF_K:
+		case BPF_JMP32 | BPF_JGT | BPF_X:
+		case BPF_JMP32 | BPF_JSGT | BPF_K:
+		case BPF_JMP32 | BPF_JSGT | BPF_X:
 			true_cond = COND_GT;
 			goto cond_branch;
 		case BPF_JMP | BPF_JLT | BPF_K:
 		case BPF_JMP | BPF_JLT | BPF_X:
 		case BPF_JMP | BPF_JSLT | BPF_K:
 		case BPF_JMP | BPF_JSLT | BPF_X:
+		case BPF_JMP32 | BPF_JLT | BPF_K:
+		case BPF_JMP32 | BPF_JLT | BPF_X:
+		case BPF_JMP32 | BPF_JSLT | BPF_K:
+		case BPF_JMP32 | BPF_JSLT | BPF_X:
 			true_cond = COND_LT;
 			goto cond_branch;
 		case BPF_JMP | BPF_JGE | BPF_K:
 		case BPF_JMP | BPF_JGE | BPF_X:
 		case BPF_JMP | BPF_JSGE | BPF_K:
 		case BPF_JMP | BPF_JSGE | BPF_X:
+		case BPF_JMP32 | BPF_JGE | BPF_K:
+		case BPF_JMP32 | BPF_JGE | BPF_X:
+		case BPF_JMP32 | BPF_JSGE | BPF_K:
+		case BPF_JMP32 | BPF_JSGE | BPF_X:
 			true_cond = COND_GE;
 			goto cond_branch;
 		case BPF_JMP | BPF_JLE | BPF_K:
 		case BPF_JMP | BPF_JLE | BPF_X:
 		case BPF_JMP | BPF_JSLE | BPF_K:
 		case BPF_JMP | BPF_JSLE | BPF_X:
+		case BPF_JMP32 | BPF_JLE | BPF_K:
+		case BPF_JMP32 | BPF_JLE | BPF_X:
+		case BPF_JMP32 | BPF_JSLE | BPF_K:
+		case BPF_JMP32 | BPF_JSLE | BPF_X:
 			true_cond = COND_LE;
 			goto cond_branch;
 		case BPF_JMP | BPF_JEQ | BPF_K:
 		case BPF_JMP | BPF_JEQ | BPF_X:
+		case BPF_JMP32 | BPF_JEQ | BPF_K:
+		case BPF_JMP32 | BPF_JEQ | BPF_X:
 			true_cond = COND_EQ;
 			goto cond_branch;
 		case BPF_JMP | BPF_JNE | BPF_K:
 		case BPF_JMP | BPF_JNE | BPF_X:
+		case BPF_JMP32 | BPF_JNE | BPF_K:
+		case BPF_JMP32 | BPF_JNE | BPF_X:
 			true_cond = COND_NE;
 			goto cond_branch;
 		case BPF_JMP | BPF_JSET | BPF_K:
 		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP32 | BPF_JSET | BPF_K:
+		case BPF_JMP32 | BPF_JSET | BPF_X:
 			true_cond = COND_NE;
 			/* Fall through */

@@ -809,18 +831,44 @@ cond_branch:
 		case BPF_JMP | BPF_JLE | BPF_X:
 		case BPF_JMP | BPF_JEQ | BPF_X:
 		case BPF_JMP | BPF_JNE | BPF_X:
+		case BPF_JMP32 | BPF_JGT | BPF_X:
+		case BPF_JMP32 | BPF_JLT | BPF_X:
+		case BPF_JMP32 | BPF_JGE | BPF_X:
+		case BPF_JMP32 | BPF_JLE | BPF_X:
+		case BPF_JMP32 | BPF_JEQ | BPF_X:
+		case BPF_JMP32 | BPF_JNE | BPF_X:
 			/* unsigned comparison */
-			PPC_CMPLD(dst_reg, src_reg);
+			if (BPF_CLASS(code) == BPF_JMP32)
+				PPC_CMPLW(dst_reg, src_reg);
+			else
+				PPC_CMPLD(dst_reg, src_reg);
 			break;
 		case BPF_JMP | BPF_JSGT | BPF_X:
 		case BPF_JMP | BPF_JSLT | BPF_X:
 		case BPF_JMP | BPF_JSGE | BPF_X:
 		case BPF_JMP | BPF_JSLE | BPF_X:
+		case BPF_JMP32 | BPF_JSGT | BPF_X:
+		case BPF_JMP32 | BPF_JSLT | BPF_X:
+		case BPF_JMP32 | BPF_JSGE | BPF_X:
+		case BPF_JMP32 | BPF_JSLE | BPF_X:
 			/* signed comparison */
-			PPC_CMPD(dst_reg, src_reg);
+			if (BPF_CLASS(code) == BPF_JMP32)
+				PPC_CMPW(dst_reg, src_reg);
+			else
+				PPC_CMPD(dst_reg, src_reg);
 			break;
 		case BPF_JMP | BPF_JSET | BPF_X:
-			PPC_AND_DOT(b2p[TMP_REG_1], dst_reg, src_reg);
+		case BPF_JMP32 | BPF_JSET | BPF_X:
+			if (BPF_CLASS(code) == BPF_JMP) {
+				PPC_AND_DOT(b2p[TMP_REG_1], dst_reg,
+					    src_reg);
+			} else {
+				int tmp_reg = b2p[TMP_REG_1];
+
+				PPC_AND(tmp_reg, dst_reg, src_reg);
+				PPC_RLWINM_DOT(tmp_reg, tmp_reg, 0, 0,
+					       31);
+			}
 			break;
 		case BPF_JMP | BPF_JNE | BPF_K:
 		case BPF_JMP | BPF_JEQ | BPF_K:

@@ -828,43 +876,87 @@ cond_branch:
 		case BPF_JMP | BPF_JLT | BPF_K:
 		case BPF_JMP | BPF_JGE | BPF_K:
 		case BPF_JMP | BPF_JLE | BPF_K:
+		case BPF_JMP32 | BPF_JNE | BPF_K:
+		case BPF_JMP32 | BPF_JEQ | BPF_K:
+		case BPF_JMP32 | BPF_JGT | BPF_K:
+		case BPF_JMP32 | BPF_JLT | BPF_K:
+		case BPF_JMP32 | BPF_JGE | BPF_K:
+		case BPF_JMP32 | BPF_JLE | BPF_K:
+		{
+			bool is_jmp32 = BPF_CLASS(code) == BPF_JMP32;
+
 			/*
 			 * Need sign-extended load, so only positive
 			 * values can be used as imm in cmpldi
 			 */
-			if (imm >= 0 && imm < 32768)
-				PPC_CMPLDI(dst_reg, imm);
-			else {
+			if (imm >= 0 && imm < 32768) {
+				if (is_jmp32)
+					PPC_CMPLWI(dst_reg, imm);
+				else
+					PPC_CMPLDI(dst_reg, imm);
+			} else {
 				/* sign-extending load */
 				PPC_LI32(b2p[TMP_REG_1], imm);
 				/* ... but unsigned comparison */
-				PPC_CMPLD(dst_reg, b2p[TMP_REG_1]);
+				if (is_jmp32)
+					PPC_CMPLW(dst_reg,
+						  b2p[TMP_REG_1]);
+				else
+					PPC_CMPLD(dst_reg,
+						  b2p[TMP_REG_1]);
 			}
 			break;
+		}
 		case BPF_JMP | BPF_JSGT | BPF_K:
 		case BPF_JMP | BPF_JSLT | BPF_K:
 		case BPF_JMP | BPF_JSGE | BPF_K:
 		case BPF_JMP | BPF_JSLE | BPF_K:
+		case BPF_JMP32 | BPF_JSGT | BPF_K:
+		case BPF_JMP32 | BPF_JSLT | BPF_K:
+		case BPF_JMP32 | BPF_JSGE | BPF_K:
+		case BPF_JMP32 | BPF_JSLE | BPF_K:
+		{
+			bool is_jmp32 = BPF_CLASS(code) == BPF_JMP32;
+
 			/*
 			 * signed comparison, so any 16-bit value
 			 * can be used in cmpdi
 			 */
-			if (imm >= -32768 && imm < 32768)
-				PPC_CMPDI(dst_reg, imm);
-			else {
+			if (imm >= -32768 && imm < 32768) {
+				if (is_jmp32)
+					PPC_CMPWI(dst_reg, imm);
+				else
+					PPC_CMPDI(dst_reg, imm);
+			} else {
 				PPC_LI32(b2p[TMP_REG_1], imm);
-				PPC_CMPD(dst_reg, b2p[TMP_REG_1]);
+				if (is_jmp32)
+					PPC_CMPW(dst_reg,
+						 b2p[TMP_REG_1]);
+				else
+					PPC_CMPD(dst_reg,
+						 b2p[TMP_REG_1]);
 			}
 			break;
+		}
 		case BPF_JMP | BPF_JSET | BPF_K:
+		case BPF_JMP32 | BPF_JSET | BPF_K:
 			/* andi does not sign-extend the immediate */
 			if (imm >= 0 && imm < 32768)
 				/* PPC_ANDI is _only/always_ dot-form */
 				PPC_ANDI(b2p[TMP_REG_1], dst_reg, imm);
 			else {
-				PPC_LI32(b2p[TMP_REG_1], imm);
-				PPC_AND_DOT(b2p[TMP_REG_1], dst_reg,
-					    b2p[TMP_REG_1]);
+				int tmp_reg = b2p[TMP_REG_1];
+
+				PPC_LI32(tmp_reg, imm);
+				if (BPF_CLASS(code) == BPF_JMP) {
+					PPC_AND_DOT(tmp_reg, dst_reg,
+						    tmp_reg);
+				} else {
+					PPC_AND(tmp_reg, dst_reg,
+						tmp_reg);
+					PPC_RLWINM_DOT(tmp_reg, tmp_reg,
+						       0, 0, 31);
+				}
 			}
 			break;
 		}
@@ -1110,103 +1110,141 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
 		mask = 0xf000; /* j */
 		goto branch_oc;
 	case BPF_JMP | BPF_JSGT | BPF_K: /* ((s64) dst > (s64) imm) */
+	case BPF_JMP32 | BPF_JSGT | BPF_K: /* ((s32) dst > (s32) imm) */
 		mask = 0x2000; /* jh */
 		goto branch_ks;
 	case BPF_JMP | BPF_JSLT | BPF_K: /* ((s64) dst < (s64) imm) */
+	case BPF_JMP32 | BPF_JSLT | BPF_K: /* ((s32) dst < (s32) imm) */
 		mask = 0x4000; /* jl */
 		goto branch_ks;
 	case BPF_JMP | BPF_JSGE | BPF_K: /* ((s64) dst >= (s64) imm) */
+	case BPF_JMP32 | BPF_JSGE | BPF_K: /* ((s32) dst >= (s32) imm) */
 		mask = 0xa000; /* jhe */
 		goto branch_ks;
 	case BPF_JMP | BPF_JSLE | BPF_K: /* ((s64) dst <= (s64) imm) */
+	case BPF_JMP32 | BPF_JSLE | BPF_K: /* ((s32) dst <= (s32) imm) */
 		mask = 0xc000; /* jle */
 		goto branch_ks;
 	case BPF_JMP | BPF_JGT | BPF_K: /* (dst_reg > imm) */
+	case BPF_JMP32 | BPF_JGT | BPF_K: /* ((u32) dst_reg > (u32) imm) */
 		mask = 0x2000; /* jh */
 		goto branch_ku;
 	case BPF_JMP | BPF_JLT | BPF_K: /* (dst_reg < imm) */
+	case BPF_JMP32 | BPF_JLT | BPF_K: /* ((u32) dst_reg < (u32) imm) */
 		mask = 0x4000; /* jl */
 		goto branch_ku;
 	case BPF_JMP | BPF_JGE | BPF_K: /* (dst_reg >= imm) */
+	case BPF_JMP32 | BPF_JGE | BPF_K: /* ((u32) dst_reg >= (u32) imm) */
 		mask = 0xa000; /* jhe */
 		goto branch_ku;
 	case BPF_JMP | BPF_JLE | BPF_K: /* (dst_reg <= imm) */
+	case BPF_JMP32 | BPF_JLE | BPF_K: /* ((u32) dst_reg <= (u32) imm) */
 		mask = 0xc000; /* jle */
 		goto branch_ku;
 	case BPF_JMP | BPF_JNE | BPF_K: /* (dst_reg != imm) */
+	case BPF_JMP32 | BPF_JNE | BPF_K: /* ((u32) dst_reg != (u32) imm) */
 		mask = 0x7000; /* jne */
 		goto branch_ku;
 	case BPF_JMP | BPF_JEQ | BPF_K: /* (dst_reg == imm) */
+	case BPF_JMP32 | BPF_JEQ | BPF_K: /* ((u32) dst_reg == (u32) imm) */
 		mask = 0x8000; /* je */
 		goto branch_ku;
 	case BPF_JMP | BPF_JSET | BPF_K: /* (dst_reg & imm) */
+	case BPF_JMP32 | BPF_JSET | BPF_K: /* ((u32) dst_reg & (u32) imm) */
 		mask = 0x7000; /* jnz */
-		/* lgfi %w1,imm (load sign extend imm) */
-		EMIT6_IMM(0xc0010000, REG_W1, imm);
-		/* ngr %w1,%dst */
-		EMIT4(0xb9800000, REG_W1, dst_reg);
+		if (BPF_CLASS(insn->code) == BPF_JMP32) {
+			/* llilf %w1,imm (load zero extend imm) */
+			EMIT6_IMM(0xc0010000, REG_W1, imm);
+			/* nr %w1,%dst */
+			EMIT2(0x1400, REG_W1, dst_reg);
+		} else {
+			/* lgfi %w1,imm (load sign extend imm) */
+			EMIT6_IMM(0xc0010000, REG_W1, imm);
+			/* ngr %w1,%dst */
+			EMIT4(0xb9800000, REG_W1, dst_reg);
+		}
 		goto branch_oc;
 
 	case BPF_JMP | BPF_JSGT | BPF_X: /* ((s64) dst > (s64) src) */
+	case BPF_JMP32 | BPF_JSGT | BPF_X: /* ((s32) dst > (s32) src) */
 		mask = 0x2000; /* jh */
 		goto branch_xs;
 	case BPF_JMP | BPF_JSLT | BPF_X: /* ((s64) dst < (s64) src) */
+	case BPF_JMP32 | BPF_JSLT | BPF_X: /* ((s32) dst < (s32) src) */
 		mask = 0x4000; /* jl */
 		goto branch_xs;
 	case BPF_JMP | BPF_JSGE | BPF_X: /* ((s64) dst >= (s64) src) */
+	case BPF_JMP32 | BPF_JSGE | BPF_X: /* ((s32) dst >= (s32) src) */
 		mask = 0xa000; /* jhe */
 		goto branch_xs;
 	case BPF_JMP | BPF_JSLE | BPF_X: /* ((s64) dst <= (s64) src) */
+	case BPF_JMP32 | BPF_JSLE | BPF_X: /* ((s32) dst <= (s32) src) */
 		mask = 0xc000; /* jle */
 		goto branch_xs;
 	case BPF_JMP | BPF_JGT | BPF_X: /* (dst > src) */
+	case BPF_JMP32 | BPF_JGT | BPF_X: /* ((u32) dst > (u32) src) */
 		mask = 0x2000; /* jh */
 		goto branch_xu;
 	case BPF_JMP | BPF_JLT | BPF_X: /* (dst < src) */
+	case BPF_JMP32 | BPF_JLT | BPF_X: /* ((u32) dst < (u32) src) */
 		mask = 0x4000; /* jl */
 		goto branch_xu;
 	case BPF_JMP | BPF_JGE | BPF_X: /* (dst >= src) */
+	case BPF_JMP32 | BPF_JGE | BPF_X: /* ((u32) dst >= (u32) src) */
 		mask = 0xa000; /* jhe */
 		goto branch_xu;
 	case BPF_JMP | BPF_JLE | BPF_X: /* (dst <= src) */
+	case BPF_JMP32 | BPF_JLE | BPF_X: /* ((u32) dst <= (u32) src) */
 		mask = 0xc000; /* jle */
 		goto branch_xu;
 	case BPF_JMP | BPF_JNE | BPF_X: /* (dst != src) */
+	case BPF_JMP32 | BPF_JNE | BPF_X: /* ((u32) dst != (u32) src) */
 		mask = 0x7000; /* jne */
 		goto branch_xu;
 	case BPF_JMP | BPF_JEQ | BPF_X: /* (dst == src) */
+	case BPF_JMP32 | BPF_JEQ | BPF_X: /* ((u32) dst == (u32) src) */
 		mask = 0x8000; /* je */
 		goto branch_xu;
 	case BPF_JMP | BPF_JSET | BPF_X: /* (dst & src) */
+	case BPF_JMP32 | BPF_JSET | BPF_X: /* ((u32) dst & (u32) src) */
+	{
+		bool is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
+
 		mask = 0x7000; /* jnz */
-		/* ngrk %w1,%dst,%src */
-		EMIT4_RRF(0xb9e40000, REG_W1, dst_reg, src_reg);
+		/* nrk or ngrk %w1,%dst,%src */
+		EMIT4_RRF((is_jmp32 ? 0xb9f40000 : 0xb9e40000),
+			  REG_W1, dst_reg, src_reg);
 		goto branch_oc;
 branch_ks:
 		/* lgfi %w1,imm (load sign extend imm) */
 		EMIT6_IMM(0xc0010000, REG_W1, imm);
-		/* cgrj %dst,%w1,mask,off */
-		EMIT6_PCREL(0xec000000, 0x0064, dst_reg, REG_W1, i, off, mask);
+		/* crj or cgrj %dst,%w1,mask,off */
+		EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0076 : 0x0064),
+			    dst_reg, REG_W1, i, off, mask);
 		break;
 branch_ku:
 		/* lgfi %w1,imm (load sign extend imm) */
 		EMIT6_IMM(0xc0010000, REG_W1, imm);
-		/* clgrj %dst,%w1,mask,off */
-		EMIT6_PCREL(0xec000000, 0x0065, dst_reg, REG_W1, i, off, mask);
+		/* clrj or clgrj %dst,%w1,mask,off */
+		EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0077 : 0x0065),
+			    dst_reg, REG_W1, i, off, mask);
 		break;
 branch_xs:
-		/* cgrj %dst,%src,mask,off */
-		EMIT6_PCREL(0xec000000, 0x0064, dst_reg, src_reg, i, off, mask);
+		/* crj or cgrj %dst,%src,mask,off */
+		EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0076 : 0x0064),
+			    dst_reg, src_reg, i, off, mask);
 		break;
 branch_xu:
-		/* clgrj %dst,%src,mask,off */
-		EMIT6_PCREL(0xec000000, 0x0065, dst_reg, src_reg, i, off, mask);
+		/* clrj or clgrj %dst,%src,mask,off */
+		EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0077 : 0x0065),
+			    dst_reg, src_reg, i, off, mask);
 		break;
 branch_oc:
 		/* brc mask,jmp_off (branch instruction needs 4 bytes) */
 		jmp_off = addrs[i + off + 1] - (addrs[i + 1] - 4);
 		EMIT4_PCREL(0xa7040000 | mask << 8, jmp_off);
 		break;
 	}
 	default: /* too complex, give up */
 		pr_err("Unknown opcode %02x\n", insn->code);
 		return -1;
arch/x86/net/bpf_jit_comp.c

@@ -881,20 +881,41 @@ xadd: if (is_imm8(insn->off))
 	case BPF_JMP | BPF_JSLT | BPF_X:
 	case BPF_JMP | BPF_JSGE | BPF_X:
 	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP32 | BPF_JEQ | BPF_X:
+	case BPF_JMP32 | BPF_JNE | BPF_X:
+	case BPF_JMP32 | BPF_JGT | BPF_X:
+	case BPF_JMP32 | BPF_JLT | BPF_X:
+	case BPF_JMP32 | BPF_JGE | BPF_X:
+	case BPF_JMP32 | BPF_JLE | BPF_X:
+	case BPF_JMP32 | BPF_JSGT | BPF_X:
+	case BPF_JMP32 | BPF_JSLT | BPF_X:
+	case BPF_JMP32 | BPF_JSGE | BPF_X:
+	case BPF_JMP32 | BPF_JSLE | BPF_X:
 		/* cmp dst_reg, src_reg */
-		EMIT3(add_2mod(0x48, dst_reg, src_reg), 0x39,
-		      add_2reg(0xC0, dst_reg, src_reg));
+		if (BPF_CLASS(insn->code) == BPF_JMP)
+			EMIT1(add_2mod(0x48, dst_reg, src_reg));
+		else if (is_ereg(dst_reg) || is_ereg(src_reg))
+			EMIT1(add_2mod(0x40, dst_reg, src_reg));
+		EMIT2(0x39, add_2reg(0xC0, dst_reg, src_reg));
 		goto emit_cond_jmp;

 	case BPF_JMP | BPF_JSET | BPF_X:
+	case BPF_JMP32 | BPF_JSET | BPF_X:
 		/* test dst_reg, src_reg */
-		EMIT3(add_2mod(0x48, dst_reg, src_reg), 0x85,
-		      add_2reg(0xC0, dst_reg, src_reg));
+		if (BPF_CLASS(insn->code) == BPF_JMP)
+			EMIT1(add_2mod(0x48, dst_reg, src_reg));
+		else if (is_ereg(dst_reg) || is_ereg(src_reg))
+			EMIT1(add_2mod(0x40, dst_reg, src_reg));
+		EMIT2(0x85, add_2reg(0xC0, dst_reg, src_reg));
 		goto emit_cond_jmp;

 	case BPF_JMP | BPF_JSET | BPF_K:
+	case BPF_JMP32 | BPF_JSET | BPF_K:
 		/* test dst_reg, imm32 */
-		EMIT1(add_1mod(0x48, dst_reg));
+		if (BPF_CLASS(insn->code) == BPF_JMP)
+			EMIT1(add_1mod(0x48, dst_reg));
+		else if (is_ereg(dst_reg))
+			EMIT1(add_1mod(0x40, dst_reg));
 		EMIT2_off32(0xF7, add_1reg(0xC0, dst_reg), imm32);
 		goto emit_cond_jmp;

@@ -908,8 +929,21 @@ xadd: if (is_imm8(insn->off))
 	case BPF_JMP | BPF_JSLT | BPF_K:
 	case BPF_JMP | BPF_JSGE | BPF_K:
 	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP32 | BPF_JEQ | BPF_K:
+	case BPF_JMP32 | BPF_JNE | BPF_K:
+	case BPF_JMP32 | BPF_JGT | BPF_K:
+	case BPF_JMP32 | BPF_JLT | BPF_K:
+	case BPF_JMP32 | BPF_JGE | BPF_K:
+	case BPF_JMP32 | BPF_JLE | BPF_K:
+	case BPF_JMP32 | BPF_JSGT | BPF_K:
+	case BPF_JMP32 | BPF_JSLT | BPF_K:
+	case BPF_JMP32 | BPF_JSGE | BPF_K:
+	case BPF_JMP32 | BPF_JSLE | BPF_K:
 		/* cmp dst_reg, imm8/32 */
-		EMIT1(add_1mod(0x48, dst_reg));
+		if (BPF_CLASS(insn->code) == BPF_JMP)
+			EMIT1(add_1mod(0x48, dst_reg));
+		else if (is_ereg(dst_reg))
+			EMIT1(add_1mod(0x40, dst_reg));

 		if (is_imm8(imm32))
 			EMIT3(0x83, add_1reg(0xF8, dst_reg), imm32);
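The prefix selection in the x86-64 hunks above follows one rule, sketched here; rex_base is a hypothetical helper, the real JIT folds this into add_1mod()/add_2mod():

/* Sketch: a 64-bit compare always needs REX.W (the 0x48-based prefix);
 * a 32-bit compare needs only a plain REX (0x40-based), and then only
 * when an extended register r8..r15 is involved -- the is_ereg() test
 * above. Otherwise no prefix byte is emitted at all. */
static int rex_base(int is_jmp64, int any_ereg)
{
	if (is_jmp64)
		return 0x48;              /* REX.W: 64-bit operand size */
	return any_ereg ? 0x40 : 0;       /* plain REX only for r8..r15 */
}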
arch/x86/net/bpf_jit_comp32.c

@@ -2072,7 +2072,18 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 	case BPF_JMP | BPF_JSGT | BPF_X:
 	case BPF_JMP | BPF_JSLE | BPF_X:
 	case BPF_JMP | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_X: {
+	case BPF_JMP | BPF_JSGE | BPF_X:
+	case BPF_JMP32 | BPF_JEQ | BPF_X:
+	case BPF_JMP32 | BPF_JNE | BPF_X:
+	case BPF_JMP32 | BPF_JGT | BPF_X:
+	case BPF_JMP32 | BPF_JLT | BPF_X:
+	case BPF_JMP32 | BPF_JGE | BPF_X:
+	case BPF_JMP32 | BPF_JLE | BPF_X:
+	case BPF_JMP32 | BPF_JSGT | BPF_X:
+	case BPF_JMP32 | BPF_JSLE | BPF_X:
+	case BPF_JMP32 | BPF_JSLT | BPF_X:
+	case BPF_JMP32 | BPF_JSGE | BPF_X: {
+		bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
 		u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
 		u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
 		u8 sreg_lo = sstk ? IA32_ECX : src_lo;

@@ -2081,25 +2092,35 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		if (dstk) {
 			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EAX),
 			      STACK_VAR(dst_lo));
-			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX),
-			      STACK_VAR(dst_hi));
+			if (is_jmp64)
+				EMIT3(0x8B,
+				      add_2reg(0x40, IA32_EBP,
+					       IA32_EDX),
+				      STACK_VAR(dst_hi));
 		}

 		if (sstk) {
 			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_ECX),
 			      STACK_VAR(src_lo));
-			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EBX),
-			      STACK_VAR(src_hi));
+			if (is_jmp64)
+				EMIT3(0x8B,
+				      add_2reg(0x40, IA32_EBP,
+					       IA32_EBX),
+				      STACK_VAR(src_hi));
 		}

-		/* cmp dreg_hi,sreg_hi */
-		EMIT2(0x39, add_2reg(0xC0, dreg_hi, sreg_hi));
-		EMIT2(IA32_JNE, 2);
+		if (is_jmp64) {
+			/* cmp dreg_hi,sreg_hi */
+			EMIT2(0x39, add_2reg(0xC0, dreg_hi, sreg_hi));
+			EMIT2(IA32_JNE, 2);
+		}
 		/* cmp dreg_lo,sreg_lo */
 		EMIT2(0x39, add_2reg(0xC0, dreg_lo, sreg_lo));
 		goto emit_cond_jmp;
 	}
-	case BPF_JMP | BPF_JSET | BPF_X: {
+	case BPF_JMP | BPF_JSET | BPF_X:
+	case BPF_JMP32 | BPF_JSET | BPF_X: {
+		bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
 		u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
 		u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
 		u8 sreg_lo = sstk ? IA32_ECX : src_lo;

@@ -2108,15 +2129,21 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		if (dstk) {
 			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EAX),
 			      STACK_VAR(dst_lo));
-			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX),
-			      STACK_VAR(dst_hi));
+			if (is_jmp64)
+				EMIT3(0x8B,
+				      add_2reg(0x40, IA32_EBP,
+					       IA32_EDX),
+				      STACK_VAR(dst_hi));
 		}

 		if (sstk) {
 			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_ECX),
 			      STACK_VAR(src_lo));
-			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EBX),
-			      STACK_VAR(src_hi));
+			if (is_jmp64)
+				EMIT3(0x8B,
+				      add_2reg(0x40, IA32_EBP,
+					       IA32_EBX),
+				      STACK_VAR(src_hi));
 		}
 		/* and dreg_lo,sreg_lo */
 		EMIT2(0x23, add_2reg(0xC0, sreg_lo, dreg_lo));

@@ -2126,32 +2153,39 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));
 		goto emit_cond_jmp;
 	}
-	case BPF_JMP | BPF_JSET | BPF_K: {
-		u32 hi;
+	case BPF_JMP | BPF_JSET | BPF_K:
+	case BPF_JMP32 | BPF_JSET | BPF_K: {
+		bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
 		u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
 		u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
 		u8 sreg_lo = IA32_ECX;
 		u8 sreg_hi = IA32_EBX;
+		u32 hi;

 		if (dstk) {
 			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EAX),
 			      STACK_VAR(dst_lo));
-			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX),
-			      STACK_VAR(dst_hi));
+			if (is_jmp64)
+				EMIT3(0x8B,
+				      add_2reg(0x40, IA32_EBP,
+					       IA32_EDX),
+				      STACK_VAR(dst_hi));
 		}
-		hi = imm32 & (1<<31) ? (u32)~0 : 0;

 		/* mov ecx,imm32 */
-		EMIT2_off32(0xC7, add_1reg(0xC0, IA32_ECX), imm32);
-		/* mov ebx,imm32 */
-		EMIT2_off32(0xC7, add_1reg(0xC0, IA32_EBX), hi);
+		EMIT2_off32(0xC7, add_1reg(0xC0, sreg_lo), imm32);

 		/* and dreg_lo,sreg_lo */
 		EMIT2(0x23, add_2reg(0xC0, sreg_lo, dreg_lo));
-		/* and dreg_hi,sreg_hi */
-		EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi));
-		/* or dreg_lo,dreg_hi */
-		EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));
+		if (is_jmp64) {
+			hi = imm32 & (1 << 31) ? (u32)~0 : 0;
+			/* mov ebx,imm32 */
+			EMIT2_off32(0xC7, add_1reg(0xC0, sreg_hi), hi);
+			/* and dreg_hi,sreg_hi */
+			EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi));
+			/* or dreg_lo,dreg_hi */
+			EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));
+		}
 		goto emit_cond_jmp;
 	}
 	case BPF_JMP | BPF_JEQ | BPF_K:

@@ -2163,29 +2197,44 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 	case BPF_JMP | BPF_JSGT | BPF_K:
 	case BPF_JMP | BPF_JSLE | BPF_K:
 	case BPF_JMP | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K: {
-		u32 hi;
+	case BPF_JMP | BPF_JSGE | BPF_K:
+	case BPF_JMP32 | BPF_JEQ | BPF_K:
+	case BPF_JMP32 | BPF_JNE | BPF_K:
+	case BPF_JMP32 | BPF_JGT | BPF_K:
+	case BPF_JMP32 | BPF_JLT | BPF_K:
+	case BPF_JMP32 | BPF_JGE | BPF_K:
+	case BPF_JMP32 | BPF_JLE | BPF_K:
+	case BPF_JMP32 | BPF_JSGT | BPF_K:
+	case BPF_JMP32 | BPF_JSLE | BPF_K:
+	case BPF_JMP32 | BPF_JSLT | BPF_K:
+	case BPF_JMP32 | BPF_JSGE | BPF_K: {
+		bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
 		u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
 		u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
 		u8 sreg_lo = IA32_ECX;
 		u8 sreg_hi = IA32_EBX;
+		u32 hi;

 		if (dstk) {
 			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EAX),
 			      STACK_VAR(dst_lo));
-			EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX),
-			      STACK_VAR(dst_hi));
+			if (is_jmp64)
+				EMIT3(0x8B,
+				      add_2reg(0x40, IA32_EBP,
					       IA32_EDX),
+				      STACK_VAR(dst_hi));
 		}

-		hi = imm32 & (1<<31) ? (u32)~0 : 0;
 		/* mov ecx,imm32 */
 		EMIT2_off32(0xC7, add_1reg(0xC0, IA32_ECX), imm32);
-		/* mov ebx,imm32 */
-		EMIT2_off32(0xC7, add_1reg(0xC0, IA32_EBX), hi);

-		/* cmp dreg_hi,sreg_hi */
-		EMIT2(0x39, add_2reg(0xC0, dreg_hi, sreg_hi));
-		EMIT2(IA32_JNE, 2);
+		if (is_jmp64) {
+			hi = imm32 & (1 << 31) ? (u32)~0 : 0;
+			/* mov ebx,imm32 */
+			EMIT2_off32(0xC7, add_1reg(0xC0, IA32_EBX), hi);
+			/* cmp dreg_hi,sreg_hi */
+			EMIT2(0x39, add_2reg(0xC0, dreg_hi, sreg_hi));
+			EMIT2(IA32_JNE, 2);
+		}
 		/* cmp dreg_lo,sreg_lo */
 		EMIT2(0x39, add_2reg(0xC0, dreg_lo, sreg_lo));
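The emitted ia32 sequence above compares a 64-bit value in two 32-bit halves. A sketch of what it computes for the equality case (illustrative only; jeq_taken is a made-up name):

#include <stdbool.h>
#include <stdint.h>

/* A 64-bit equality test on a 32-bit machine: compare the high halves
 * first; the emitted "jne 2" skips the low-half compare when they
 * already differ, so the final conditional jump sees the right flags.
 * JMP32 skips the high-half compare entirely. */
static bool jeq_taken(uint32_t dst_lo, uint32_t dst_hi,
		      uint32_t src_lo, uint32_t src_hi, bool is_jmp64)
{
	if (is_jmp64 && dst_hi != src_hi)
		return false;
	return dst_lo == src_lo;
}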
drivers/net/ethernet/netronome/nfp/bpf/jit.c

@@ -1266,7 +1266,7 @@ wrp_alu64_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 	u64 imm = insn->imm; /* sign extend */

 	if (skip) {
-		meta->skip = true;
+		meta->flags |= FLAG_INSN_SKIP_NOOP;
 		return 0;
 	}

@@ -1296,7 +1296,7 @@ wrp_alu32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 	const struct bpf_insn *insn = &meta->insn;

 	if (skip) {
-		meta->skip = true;
+		meta->flags |= FLAG_INSN_SKIP_NOOP;
 		return 0;
 	}

@@ -1334,8 +1334,9 @@ wrp_test_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 	wrp_test_reg_one(nfp_prog, insn->dst_reg * 2, alu_op,
 			 insn->src_reg * 2, br_mask, insn->off);
-	wrp_test_reg_one(nfp_prog, insn->dst_reg * 2 + 1, alu_op,
-			 insn->src_reg * 2 + 1, br_mask, insn->off);
+	if (is_mbpf_jmp64(meta))
+		wrp_test_reg_one(nfp_prog, insn->dst_reg * 2 + 1, alu_op,
+				 insn->src_reg * 2 + 1, br_mask, insn->off);

 	return 0;
 }

@@ -1390,13 +1391,15 @@ static int cmp_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	else
 		emit_alu(nfp_prog, reg_none(), tmp_reg, alu_op, reg_a(reg));

-	tmp_reg = ur_load_imm_any(nfp_prog, imm >> 32, imm_b(nfp_prog));
-	if (!code->swap)
-		emit_alu(nfp_prog, reg_none(),
-			 reg_a(reg + 1), carry_op, tmp_reg);
-	else
-		emit_alu(nfp_prog, reg_none(),
-			 tmp_reg, carry_op, reg_a(reg + 1));
+	if (is_mbpf_jmp64(meta)) {
+		tmp_reg = ur_load_imm_any(nfp_prog, imm >> 32, imm_b(nfp_prog));
+		if (!code->swap)
+			emit_alu(nfp_prog, reg_none(),
+				 reg_a(reg + 1), carry_op, tmp_reg);
+		else
+			emit_alu(nfp_prog, reg_none(),
+				 tmp_reg, carry_op, reg_a(reg + 1));
+	}

 	emit_br(nfp_prog, code->br_mask, insn->off, 0);

@@ -1423,8 +1426,9 @@ static int cmp_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	}

 	emit_alu(nfp_prog, reg_none(), reg_a(areg), ALU_OP_SUB, reg_b(breg));
-	emit_alu(nfp_prog, reg_none(),
-		 reg_a(areg + 1), ALU_OP_SUB_C, reg_b(breg + 1));
+	if (is_mbpf_jmp64(meta))
+		emit_alu(nfp_prog, reg_none(),
+			 reg_a(areg + 1), ALU_OP_SUB_C, reg_b(breg + 1));
 	emit_br(nfp_prog, code->br_mask, insn->off, 0);

 	return 0;

@@ -3048,6 +3052,19 @@ static int jeq_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	return 0;
 }

+static int jeq32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
+{
+	const struct bpf_insn *insn = &meta->insn;
+	swreg tmp_reg;
+
+	tmp_reg = ur_load_imm_any(nfp_prog, insn->imm, imm_b(nfp_prog));
+	emit_alu(nfp_prog, reg_none(),
+		 reg_a(insn->dst_reg * 2), ALU_OP_XOR, tmp_reg);
+	emit_br(nfp_prog, BR_BEQ, insn->off, 0);
+
+	return 0;
+}
+
 static int jset_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
 	const struct bpf_insn *insn = &meta->insn;

@@ -3061,9 +3078,10 @@ static int jset_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	/* Upper word of the mask can only be 0 or ~0 from sign extension,
 	 * so either ignore it or OR the whole thing in.
 	 */
-	if (imm >> 32)
+	if (is_mbpf_jmp64(meta) && imm >> 32) {
 		emit_alu(nfp_prog, reg_none(),
 			 reg_a(dst_gpr + 1), ALU_OP_OR, imm_b(nfp_prog));
+	}
 	emit_br(nfp_prog, BR_BNE, insn->off, 0);

 	return 0;

@@ -3073,11 +3091,16 @@ static int jne_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
 	const struct bpf_insn *insn = &meta->insn;
 	u64 imm = insn->imm; /* sign extend */
+	bool is_jmp32 = is_mbpf_jmp32(meta);
 	swreg tmp_reg;

 	if (!imm) {
-		emit_alu(nfp_prog, reg_none(), reg_a(insn->dst_reg * 2),
-			 ALU_OP_OR, reg_b(insn->dst_reg * 2 + 1));
+		if (is_jmp32)
+			emit_alu(nfp_prog, reg_none(), reg_none(), ALU_OP_NONE,
+				 reg_b(insn->dst_reg * 2));
+		else
+			emit_alu(nfp_prog, reg_none(), reg_a(insn->dst_reg * 2),
+				 ALU_OP_OR, reg_b(insn->dst_reg * 2 + 1));
 		emit_br(nfp_prog, BR_BNE, insn->off, 0);
 		return 0;
 	}

@@ -3087,6 +3110,9 @@ static int jne_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 		 reg_a(insn->dst_reg * 2), ALU_OP_XOR, tmp_reg);
 	emit_br(nfp_prog, BR_BNE, insn->off, 0);

+	if (is_jmp32)
+		return 0;
+
 	tmp_reg = ur_load_imm_any(nfp_prog, imm >> 32, imm_b(nfp_prog));
 	emit_alu(nfp_prog, reg_none(),
 		 reg_a(insn->dst_reg * 2 + 1), ALU_OP_XOR, tmp_reg);

@@ -3101,10 +3127,13 @@ static int jeq_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	emit_alu(nfp_prog, imm_a(nfp_prog), reg_a(insn->dst_reg * 2),
 		 ALU_OP_XOR, reg_b(insn->src_reg * 2));
-	emit_alu(nfp_prog, imm_b(nfp_prog), reg_a(insn->dst_reg * 2 + 1),
-		 ALU_OP_XOR, reg_b(insn->src_reg * 2 + 1));
-	emit_alu(nfp_prog, reg_none(),
-		 imm_a(nfp_prog), ALU_OP_OR, imm_b(nfp_prog));
+	if (is_mbpf_jmp64(meta)) {
+		emit_alu(nfp_prog, imm_b(nfp_prog),
+			 reg_a(insn->dst_reg * 2 + 1), ALU_OP_XOR,
+			 reg_b(insn->src_reg * 2 + 1));
+		emit_alu(nfp_prog, reg_none(), imm_a(nfp_prog), ALU_OP_OR,
+			 imm_b(nfp_prog));
+	}
 	emit_br(nfp_prog, BR_BEQ, insn->off, 0);

 	return 0;

@@ -3182,7 +3211,7 @@ bpf_to_bpf_call(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 		wrp_immed_relo(nfp_prog, imm_b(nfp_prog), 0, RELO_IMMED_REL);
 	} else {
 		ret_tgt = nfp_prog_current_offset(nfp_prog) + 2;
-		emit_br(nfp_prog, BR_UNC, meta->n + 1 + meta->insn.imm, 1);
+		emit_br(nfp_prog, BR_UNC, meta->insn.imm, 1);
 		offset_br = nfp_prog_current_offset(nfp_prog);
 	}
 	wrp_immed_relo(nfp_prog, ret_reg(nfp_prog), ret_tgt, RELO_IMMED_REL);

@@ -3369,6 +3398,28 @@ static const instr_cb_t instr_cb[256] = {
 	[BPF_JMP | BPF_JSLE | BPF_X] =	cmp_reg,
 	[BPF_JMP | BPF_JSET | BPF_X] =	jset_reg,
 	[BPF_JMP | BPF_JNE | BPF_X] =	jne_reg,
+	[BPF_JMP32 | BPF_JEQ | BPF_K] =	jeq32_imm,
+	[BPF_JMP32 | BPF_JGT | BPF_K] =	cmp_imm,
+	[BPF_JMP32 | BPF_JGE | BPF_K] =	cmp_imm,
+	[BPF_JMP32 | BPF_JLT | BPF_K] =	cmp_imm,
+	[BPF_JMP32 | BPF_JLE | BPF_K] =	cmp_imm,
+	[BPF_JMP32 | BPF_JSGT | BPF_K] = cmp_imm,
+	[BPF_JMP32 | BPF_JSGE | BPF_K] = cmp_imm,
+	[BPF_JMP32 | BPF_JSLT | BPF_K] = cmp_imm,
+	[BPF_JMP32 | BPF_JSLE | BPF_K] = cmp_imm,
+	[BPF_JMP32 | BPF_JSET | BPF_K] = jset_imm,
+	[BPF_JMP32 | BPF_JNE | BPF_K] =	jne_imm,
+	[BPF_JMP32 | BPF_JEQ | BPF_X] =	jeq_reg,
+	[BPF_JMP32 | BPF_JGT | BPF_X] =	cmp_reg,
+	[BPF_JMP32 | BPF_JGE | BPF_X] =	cmp_reg,
+	[BPF_JMP32 | BPF_JLT | BPF_X] =	cmp_reg,
+	[BPF_JMP32 | BPF_JLE | BPF_X] =	cmp_reg,
+	[BPF_JMP32 | BPF_JSGT | BPF_X] = cmp_reg,
+	[BPF_JMP32 | BPF_JSGE | BPF_X] = cmp_reg,
+	[BPF_JMP32 | BPF_JSLT | BPF_X] = cmp_reg,
+	[BPF_JMP32 | BPF_JSLE | BPF_X] = cmp_reg,
+	[BPF_JMP32 | BPF_JSET | BPF_X] = jset_reg,
+	[BPF_JMP32 | BPF_JNE | BPF_X] =	jne_reg,
 	[BPF_JMP | BPF_CALL] =		call,
 	[BPF_JMP | BPF_EXIT] =		jmp_exit,
 };

@@ -3395,9 +3446,9 @@ static int nfp_fixup_branches(struct nfp_prog *nfp_prog)
 	int err;

 	list_for_each_entry(meta, &nfp_prog->insns, l) {
-		if (meta->skip)
+		if (meta->flags & FLAG_INSN_SKIP_MASK)
 			continue;
-		if (BPF_CLASS(meta->insn.code) != BPF_JMP)
+		if (!is_mbpf_jmp(meta))
 			continue;
 		if (meta->insn.code == (BPF_JMP | BPF_EXIT) &&
 		    !nfp_is_main_function(meta))

@@ -3439,7 +3490,7 @@ static int nfp_fixup_branches(struct nfp_prog *nfp_prog)
 		jmp_dst = meta->jmp_dst;

-		if (jmp_dst->skip) {
+		if (jmp_dst->flags & FLAG_INSN_SKIP_PREC_DEPENDENT) {
 			pr_err("Branch landing on removed instruction!!\n");
 			return -ELOOP;
 		}

@@ -3689,7 +3740,7 @@ static int nfp_translate(struct nfp_prog *nfp_prog)
 			return nfp_prog->error;
 		}

-		if (meta->skip) {
+		if (meta->flags & FLAG_INSN_SKIP_MASK) {
 			nfp_prog->n_translated++;
 			continue;
 		}

@@ -3737,10 +3788,10 @@ static void nfp_bpf_opt_reg_init(struct nfp_prog *nfp_prog)
 		/* Programs start with R6 = R1 but we ignore the skb pointer */
 		if (insn.code == (BPF_ALU64 | BPF_MOV | BPF_X) &&
 		    insn.src_reg == 1 && insn.dst_reg == 6)
-			meta->skip = true;
+			meta->flags |= FLAG_INSN_SKIP_PREC_DEPENDENT;

 		/* Return as soon as something doesn't match */
-		if (!meta->skip)
+		if (!(meta->flags & FLAG_INSN_SKIP_MASK))
 			return;
 	}
 }

@@ -3755,19 +3806,17 @@ static void nfp_bpf_opt_neg_add_sub(struct nfp_prog *nfp_prog)
 	list_for_each_entry(meta, &nfp_prog->insns, l) {
 		struct bpf_insn insn = meta->insn;

-		if (meta->skip)
+		if (meta->flags & FLAG_INSN_SKIP_MASK)
 			continue;

-		if (BPF_CLASS(insn.code) != BPF_ALU &&
-		    BPF_CLASS(insn.code) != BPF_ALU64 &&
-		    BPF_CLASS(insn.code) != BPF_JMP)
+		if (!is_mbpf_alu(meta) && !is_mbpf_jmp(meta))
 			continue;
 		if (BPF_SRC(insn.code) != BPF_K)
 			continue;
 		if (insn.imm >= 0)
 			continue;

-		if (BPF_CLASS(insn.code) == BPF_JMP) {
+		if (is_mbpf_jmp(meta)) {
 			switch (BPF_OP(insn.code)) {
 			case BPF_JGE:
 			case BPF_JSGE:

@@ -3829,7 +3878,7 @@ static void nfp_bpf_opt_ld_mask(struct nfp_prog *nfp_prog)
 		if (meta2->flags & FLAG_INSN_IS_JUMP_DST)
 			continue;

-		meta2->skip = true;
+		meta2->flags |= FLAG_INSN_SKIP_PREC_DEPENDENT;
 	}
 }

@@ -3869,8 +3918,8 @@ static void nfp_bpf_opt_ld_shift(struct nfp_prog *nfp_prog)
 		    meta3->flags & FLAG_INSN_IS_JUMP_DST)
 			continue;

-		meta2->skip = true;
-		meta3->skip = true;
+		meta2->flags |= FLAG_INSN_SKIP_PREC_DEPENDENT;
+		meta3->flags |= FLAG_INSN_SKIP_PREC_DEPENDENT;
 	}
 }

@@ -4065,7 +4114,8 @@ static void nfp_bpf_opt_ldst_gather(struct nfp_prog *nfp_prog)
 			}

 			head_ld_meta->paired_st = &head_st_meta->insn;
-			head_st_meta->skip = true;
+			head_st_meta->flags |=
+				FLAG_INSN_SKIP_PREC_DEPENDENT;
 		} else {
 			head_ld_meta->ldst_gather_len = 0;
 		}

@@ -4098,8 +4148,8 @@ static void nfp_bpf_opt_ldst_gather(struct nfp_prog *nfp_prog)
 			head_ld_meta = meta1;
 			head_st_meta = meta2;
 		} else {
-			meta1->skip = true;
-			meta2->skip = true;
+			meta1->flags |= FLAG_INSN_SKIP_PREC_DEPENDENT;
+			meta2->flags |= FLAG_INSN_SKIP_PREC_DEPENDENT;
 		}

 		head_ld_meta->ldst_gather_len += BPF_LDST_BYTES(ld);

@@ -4124,7 +4174,7 @@ static void nfp_bpf_opt_pkt_cache(struct nfp_prog *nfp_prog)
 		if (meta->flags & FLAG_INSN_IS_JUMP_DST)
 			cache_avail = false;

-		if (meta->skip)
+		if (meta->flags & FLAG_INSN_SKIP_MASK)
 			continue;

 		insn = &meta->insn;

@@ -4210,7 +4260,7 @@ start_new:
 	}

 	list_for_each_entry(meta, &nfp_prog->insns, l) {
-		if (meta->skip)
+		if (meta->flags & FLAG_INSN_SKIP_MASK)
 			continue;

 		if (is_mbpf_load_pkt(meta) && !meta->ldst_gather_len) {

@@ -4246,7 +4296,8 @@ static int nfp_bpf_replace_map_ptrs(struct nfp_prog *nfp_prog)
 	u32 id;

 	nfp_for_each_insn_walk2(nfp_prog, meta1, meta2) {
-		if (meta1->skip || meta2->skip)
+		if (meta1->flags & FLAG_INSN_SKIP_MASK ||
+		    meta2->flags & FLAG_INSN_SKIP_MASK)
 			continue;

 		if (meta1->insn.code != (BPF_LD | BPF_IMM | BPF_DW) ||

@@ -4325,7 +4376,7 @@ int nfp_bpf_jit(struct nfp_prog *nfp_prog)
 	return ret;
 }

-void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog, unsigned int cnt)
+void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog)
 {
 	struct nfp_insn_meta *meta;

@@ -4336,7 +4387,7 @@ void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog)
 		unsigned int dst_idx;
 		bool pseudo_call;

-		if (BPF_CLASS(code) != BPF_JMP)
+		if (!is_mbpf_jmp(meta))
 			continue;
 		if (BPF_OP(code) == BPF_EXIT)
 			continue;

@@ -4353,7 +4404,7 @@ void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog)
 		else
 			dst_idx = meta->n + 1 + meta->insn.off;

-		dst_meta = nfp_bpf_goto_meta(nfp_prog, meta, dst_idx, cnt);
+		dst_meta = nfp_bpf_goto_meta(nfp_prog, meta, dst_idx);

 		if (pseudo_call)
 			dst_meta->flags |= FLAG_INSN_IS_SUBPROG_START;
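The nfp hunks above all apply the same split: the 64-bit comparison helpers stay, and only the step touching the high 32-bit word becomes conditional. A plain-C sketch of the subtract/subtract-with-carry pattern in cmp_imm()/cmp_reg() (illustrative, not driver code):

#include <stdbool.h>
#include <stdint.h>

/* 64-bit unsigned "less than" from a low-word subtract plus a
 * high-word subtract-with-borrow; JMP32 stops after the low word. */
static bool jlt_taken(uint64_t dst, uint64_t src, bool is_jmp64)
{
	uint32_t dlo = (uint32_t)dst, slo = (uint32_t)src;
	uint32_t dhi = (uint32_t)(dst >> 32), shi = (uint32_t)(src >> 32);
	bool borrow = dlo < slo;        /* what ALU_OP_SUB's carry records */

	if (!is_jmp64)
		return borrow;          /* JMP32: low word only */
	/* ALU_OP_SUB_C folds the borrow into the high-word compare */
	return dhi < shi || (dhi == shi && borrow);
}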
drivers/net/ethernet/netronome/nfp/bpf/main.h

@@ -243,6 +243,16 @@ struct nfp_bpf_reg_state {
 #define FLAG_INSN_IS_JUMP_DST			BIT(0)
 #define FLAG_INSN_IS_SUBPROG_START		BIT(1)
 #define FLAG_INSN_PTR_CALLER_STACK_FRAME	BIT(2)
+/* Instruction is pointless, noop even on its own */
+#define FLAG_INSN_SKIP_NOOP			BIT(3)
+/* Instruction is optimized out based on preceding instructions */
+#define FLAG_INSN_SKIP_PREC_DEPENDENT		BIT(4)
+/* Instruction is optimized by the verifier */
+#define FLAG_INSN_SKIP_VERIFIER_OPT		BIT(5)
+
+#define FLAG_INSN_SKIP_MASK		(FLAG_INSN_SKIP_NOOP | \
+					 FLAG_INSN_SKIP_PREC_DEPENDENT | \
+					 FLAG_INSN_SKIP_VERIFIER_OPT)

 /**
  * struct nfp_insn_meta - BPF instruction wrapper

@@ -271,7 +281,6 @@ struct nfp_bpf_reg_state {
  * @n: eBPF instruction number
  * @flags: eBPF instruction extra optimization flags
  * @subprog_idx: index of subprogram to which the instruction belongs
- * @skip: skip this instruction (optimized out)
  * @double_cb: callback for second part of the instruction
  * @l: link on nfp_prog->insns list
  */

@@ -319,7 +328,6 @@ struct nfp_insn_meta {
 	unsigned short n;
 	unsigned short flags;
 	unsigned short subprog_idx;
-	bool skip;
 	instr_cb_t double_cb;

 	struct list_head l;

@@ -357,6 +365,21 @@ static inline bool is_mbpf_load(const struct nfp_insn_meta *meta)
 	return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_LDX | BPF_MEM);
 }

+static inline bool is_mbpf_jmp32(const struct nfp_insn_meta *meta)
+{
+	return mbpf_class(meta) == BPF_JMP32;
+}
+
+static inline bool is_mbpf_jmp64(const struct nfp_insn_meta *meta)
+{
+	return mbpf_class(meta) == BPF_JMP;
+}
+
+static inline bool is_mbpf_jmp(const struct nfp_insn_meta *meta)
+{
+	return is_mbpf_jmp32(meta) || is_mbpf_jmp64(meta);
+}
+
 static inline bool is_mbpf_store(const struct nfp_insn_meta *meta)
 {
 	return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_MEM);

@@ -407,6 +430,20 @@ static inline bool is_mbpf_div(const struct nfp_insn_meta *meta)
 	return is_mbpf_alu(meta) && mbpf_op(meta) == BPF_DIV;
 }

+static inline bool is_mbpf_cond_jump(const struct nfp_insn_meta *meta)
+{
+	u8 op;
+
+	if (is_mbpf_jmp32(meta))
+		return true;
+
+	if (!is_mbpf_jmp64(meta))
+		return false;
+
+	op = mbpf_op(meta);
+	return op != BPF_JA && op != BPF_EXIT && op != BPF_CALL;
+}
+
 static inline bool is_mbpf_helper_call(const struct nfp_insn_meta *meta)
 {
 	struct bpf_insn insn = meta->insn;

@@ -457,6 +494,7 @@ struct nfp_bpf_subprog_info {
  * @subprog_cnt: number of sub-programs, including main function
  * @map_records: the map record pointers from bpf->maps_neutral
  * @subprog: pointer to an array of objects holding info about sub-programs
+ * @n_insns: number of instructions on @insns list
  * @insns: list of BPF instruction wrappers (struct nfp_insn_meta)
  */
 struct nfp_prog {

@@ -489,6 +527,7 @@ struct nfp_prog {
 	struct nfp_bpf_neutral_map **map_records;
 	struct nfp_bpf_subprog_info *subprog;

+	unsigned int n_insns;
 	struct list_head insns;
 };

@@ -505,7 +544,7 @@ struct nfp_bpf_vnic {
 };

 bool nfp_is_subprog_start(struct nfp_insn_meta *meta);
-void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog, unsigned int cnt);
+void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog);
 int nfp_bpf_jit(struct nfp_prog *prog);
 bool nfp_bpf_supported_opcode(u8 code);

@@ -513,6 +552,10 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
 		    int prev_insn_idx);
 int nfp_bpf_finalize(struct bpf_verifier_env *env);

+int nfp_bpf_opt_replace_insn(struct bpf_verifier_env *env, u32 off,
+			     struct bpf_insn *insn);
+int nfp_bpf_opt_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt);
+
 extern const struct bpf_prog_offload_ops nfp_bpf_dev_ops;

 struct netdev_bpf;

@@ -526,7 +569,7 @@ int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog,

 struct nfp_insn_meta *
 nfp_bpf_goto_meta(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
-		  unsigned int insn_idx, unsigned int n_insns);
+		  unsigned int insn_idx);

 void *nfp_bpf_relo_for_vnic(struct nfp_prog *nfp_prog, struct nfp_bpf_vnic *bv);
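How the new helpers compose, as a hypothetical walk over the instruction list (the types and list iteration mirror the driver headers above, but this function itself is only an illustration):

static unsigned int count_live_cond_jumps(struct nfp_prog *nfp_prog)
{
	struct nfp_insn_meta *meta;
	unsigned int n = 0;

	list_for_each_entry(meta, &nfp_prog->insns, l) {
		if (meta->flags & FLAG_INSN_SKIP_MASK)	/* any skip reason */
			continue;
		if (is_mbpf_cond_jump(meta))		/* JMP32 or cond JMP */
			n++;
	}
	return n;
}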
drivers/net/ethernet/netronome/nfp/bpf/offload.c

@@ -163,8 +163,9 @@ nfp_prog_prepare(struct nfp_prog *nfp_prog, const struct bpf_insn *prog,
 		list_add_tail(&meta->l, &nfp_prog->insns);
 	}
+	nfp_prog->n_insns = cnt;

-	nfp_bpf_jit_prepare(nfp_prog, cnt);
+	nfp_bpf_jit_prepare(nfp_prog);

 	return 0;
 }

@@ -219,6 +220,10 @@ static int nfp_bpf_translate(struct bpf_prog *prog)
 	unsigned int max_instr;
 	int err;

+	/* We depend on dead code elimination succeeding */
+	if (prog->aux->offload->opt_failed)
+		return -EINVAL;
+
 	max_instr = nn_readw(nn, NFP_NET_CFG_BPF_MAX_LEN);
 	nfp_prog->__prog_alloc_len = max_instr * sizeof(u64);

@@ -591,6 +596,8 @@ int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog,
 const struct bpf_prog_offload_ops nfp_bpf_dev_ops = {
 	.insn_hook	= nfp_verify_insn,
 	.finalize	= nfp_bpf_finalize,
+	.replace_insn	= nfp_bpf_opt_replace_insn,
+	.remove_insns	= nfp_bpf_opt_remove_insns,
 	.prepare	= nfp_bpf_verifier_prep,
 	.translate	= nfp_bpf_translate,
 	.destroy	= nfp_bpf_destroy,
drivers/net/ethernet/netronome/nfp/bpf/verifier.c

@@ -18,15 +18,15 @@
 struct nfp_insn_meta *
 nfp_bpf_goto_meta(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
-		  unsigned int insn_idx, unsigned int n_insns)
+		  unsigned int insn_idx)
 {
 	unsigned int forward, backward, i;

 	backward = meta->n - insn_idx;
 	forward = insn_idx - meta->n;

-	if (min(forward, backward) > n_insns - insn_idx - 1) {
-		backward = n_insns - insn_idx - 1;
+	if (min(forward, backward) > nfp_prog->n_insns - insn_idx - 1) {
+		backward = nfp_prog->n_insns - insn_idx - 1;
 		meta = nfp_prog_last_meta(nfp_prog);
 	}
 	if (min(forward, backward) > insn_idx && backward > insn_idx) {

@@ -629,7 +629,7 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
 	struct nfp_prog *nfp_prog = env->prog->aux->offload->dev_priv;
 	struct nfp_insn_meta *meta = nfp_prog->verifier_meta;

-	meta = nfp_bpf_goto_meta(nfp_prog, meta, insn_idx, env->prog->len);
+	meta = nfp_bpf_goto_meta(nfp_prog, meta, insn_idx);
 	nfp_prog->verifier_meta = meta;

 	if (!nfp_bpf_supported_opcode(meta->insn.code)) {

@@ -690,8 +690,7 @@ nfp_assign_subprog_idx_and_regs(struct bpf_verifier_env *env,
 	return 0;
 }

-static unsigned int
-nfp_bpf_get_stack_usage(struct nfp_prog *nfp_prog, unsigned int cnt)
+static unsigned int nfp_bpf_get_stack_usage(struct nfp_prog *nfp_prog)
 {
 	struct nfp_insn_meta *meta = nfp_prog_first_meta(nfp_prog);
 	unsigned int max_depth = 0, depth = 0, frame = 0;

@@ -726,7 +725,7 @@ continue_subprog:
 	/* Find the callee and start processing it. */
 	meta = nfp_bpf_goto_meta(nfp_prog, meta,
-				 meta->n + 1 + meta->insn.imm, cnt);
+				 meta->n + 1 + meta->insn.imm);
 	idx = meta->subprog_idx;
 	frame++;
 	goto process_subprog;

@@ -778,8 +777,7 @@ int nfp_bpf_finalize(struct bpf_verifier_env *env)
 	nn = netdev_priv(env->prog->aux->offload->netdev);
 	max_stack = nn_readb(nn, NFP_NET_CFG_BPF_STACK_SZ) * 64;
-	nfp_prog->stack_size = nfp_bpf_get_stack_usage(nfp_prog,
-						       env->prog->len);
+	nfp_prog->stack_size = nfp_bpf_get_stack_usage(nfp_prog);
 	if (nfp_prog->stack_size > max_stack) {
 		pr_vlog(env, "stack too large: program %dB > FW stack %dB\n",
 			nfp_prog->stack_size, max_stack);

@@ -788,3 +786,61 @@ int nfp_bpf_finalize(struct bpf_verifier_env *env)
 	return 0;
 }
+
+int nfp_bpf_opt_replace_insn(struct bpf_verifier_env *env, u32 off,
+			     struct bpf_insn *insn)
+{
+	struct nfp_prog *nfp_prog = env->prog->aux->offload->dev_priv;
+	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+	struct nfp_insn_meta *meta = nfp_prog->verifier_meta;
+
+	meta = nfp_bpf_goto_meta(nfp_prog, meta, aux_data[off].orig_idx);
+	nfp_prog->verifier_meta = meta;
+
+	/* conditional jump to jump conversion */
+	if (is_mbpf_cond_jump(meta) &&
+	    insn->code == (BPF_JMP | BPF_JA | BPF_K)) {
+		unsigned int tgt_off;
+
+		tgt_off = off + insn->off + 1;
+
+		if (!insn->off) {
+			meta->jmp_dst = list_next_entry(meta, l);
+			meta->jump_neg_op = false;
+		} else if (meta->jmp_dst->n != aux_data[tgt_off].orig_idx) {
+			pr_vlog(env, "branch hard wire at %d changes target %d -> %d\n",
+				off, meta->jmp_dst->n,
+				aux_data[tgt_off].orig_idx);
+			return -EINVAL;
+		}
+		return 0;
+	}
+
+	pr_vlog(env, "unsupported instruction replacement %hhx -> %hhx\n",
+		meta->insn.code, insn->code);
+	return -EINVAL;
+}
+
+int nfp_bpf_opt_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt)
+{
+	struct nfp_prog *nfp_prog = env->prog->aux->offload->dev_priv;
+	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+	struct nfp_insn_meta *meta = nfp_prog->verifier_meta;
+	unsigned int i;
+
+	meta = nfp_bpf_goto_meta(nfp_prog, meta, aux_data[off].orig_idx);
+
+	for (i = 0; i < cnt; i++) {
+		if (WARN_ON_ONCE(&meta->l == &nfp_prog->insns))
+			return -EINVAL;
+
+		/* doesn't count if it already has the flag */
+		if (meta->flags & FLAG_INSN_SKIP_VERIFIER_OPT)
+			i--;
+
+		meta->flags |= FLAG_INSN_SKIP_VERIFIER_OPT;
+		meta = list_next_entry(meta, l);
+	}
+
+	return 0;
+}
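The i-- in nfp_bpf_opt_remove_insns() above keeps a subtle invariant: the verifier counts only instructions it still sees, so meta entries already flagged as removed must not consume the removal budget. A standalone plain-C model of that loop, using an array instead of the driver's linked list (illustrative only):

#include <stdbool.h>

static void remove_insns_model(bool *already_skipped, int start, int cnt)
{
	int i, pos = start;

	for (i = 0; i < cnt; i++, pos++) {
		if (already_skipped[pos])
			i--;			/* doesn't count, as above */
		already_skipped[pos] = true;
	}
}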
include/linux/bpf.h

@@ -268,9 +268,15 @@ struct bpf_verifier_ops {
 };

 struct bpf_prog_offload_ops {
+	/* verifier basic callbacks */
 	int (*insn_hook)(struct bpf_verifier_env *env,
 			 int insn_idx, int prev_insn_idx);
 	int (*finalize)(struct bpf_verifier_env *env);
+	/* verifier optimization callbacks (called after .finalize) */
+	int (*replace_insn)(struct bpf_verifier_env *env, u32 off,
+			    struct bpf_insn *insn);
+	int (*remove_insns)(struct bpf_verifier_env *env, u32 off, u32 cnt);
+	/* program management callbacks */
 	int (*prepare)(struct bpf_prog *prog);
 	int (*translate)(struct bpf_prog *prog);
 	void (*destroy)(struct bpf_prog *prog);

@@ -283,6 +289,7 @@ struct bpf_prog_offload {
 	void			*dev_priv;
 	struct list_head	offloads;
 	bool			dev_state;
+	bool			opt_failed;
 	void			*jited_image;
 	u32			jited_len;
 };

@@ -397,6 +404,9 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 			  union bpf_attr __user *uattr);
 int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
 			  union bpf_attr __user *uattr);
+int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
+				     const union bpf_attr *kattr,
+				     union bpf_attr __user *uattr);

 /* an array of programs to be executed under rcu_lock.
 *
include/linux/bpf_verifier.h

@@ -187,6 +187,7 @@ struct bpf_insn_aux_data {
 	int sanitize_stack_off; /* stack slot to be cleared */
 	bool seen; /* this insn was processed by the verifier */
 	u8 alu_state; /* used in combination with alu_limit */
+	unsigned int orig_idx; /* original instruction index */
 };

 #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */

@@ -265,5 +266,10 @@ int bpf_prog_offload_verifier_prep(struct bpf_prog *prog);
 int bpf_prog_offload_verify_insn(struct bpf_verifier_env *env,
 				 int insn_idx, int prev_insn_idx);
 int bpf_prog_offload_finalize(struct bpf_verifier_env *env);
+void
+bpf_prog_offload_replace_insn(struct bpf_verifier_env *env, u32 off,
+			      struct bpf_insn *insn);
+void
+bpf_prog_offload_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt);

 #endif /* _LINUX_BPF_VERIFIER_H */
include/linux/filter.h

@@ -277,6 +277,26 @@ struct sock_reuseport;
 		.off   = OFF,					\
 		.imm   = IMM })

+/* Like BPF_JMP_REG, but with 32-bit wide operands for comparison. */
+
+#define BPF_JMP32_REG(OP, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_JMP32 | BPF_OP(OP) | BPF_X,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = 0 })
+
+/* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
+
+#define BPF_JMP32_IMM(OP, DST, IMM, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_JMP32 | BPF_OP(OP) | BPF_K,	\
+		.dst_reg = DST,					\
+		.src_reg = 0,					\
+		.off   = OFF,					\
+		.imm   = IMM })
+
 /* Unconditional jumps, goto pc + off16 */

 #define BPF_JMP_A(OFF)						\

@@ -778,6 +798,7 @@ static inline bool bpf_dump_raw_ok(void)
 struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
 				       const struct bpf_insn *patch, u32 len);
+int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt);

 void bpf_clear_redirect_map(struct bpf_map *map);
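A minimal usage sketch for the new macro, building "if w2 > 7 goto pc+3" next to its 64-bit counterpart (the array name is invented):

static const struct bpf_insn example[] = {
	BPF_JMP32_IMM(BPF_JGT, BPF_REG_2, 7, 3),	/* low 32 bits only */
	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 7, 3),		/* full 64 bits */
};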
include/linux/skbuff.h

@@ -1221,6 +1221,11 @@ static inline int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
 }
 #endif

+struct bpf_flow_keys;
+bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
+			    const struct sk_buff *skb,
+			    struct flow_dissector *flow_dissector,
+			    struct bpf_flow_keys *flow_keys);
 bool __skb_flow_dissect(const struct sk_buff *skb,
 			struct flow_dissector *flow_dissector,
 			void *target_container,
include/net/net_namespace.h

@@ -31,6 +31,7 @@
 #include <net/netns/xfrm.h>
 #include <net/netns/mpls.h>
 #include <net/netns/can.h>
+#include <net/netns/xdp.h>
 #include <linux/ns_common.h>
 #include <linux/idr.h>
 #include <linux/skbuff.h>

@@ -160,6 +161,9 @@ struct net {
 #endif
 #if IS_ENABLED(CONFIG_CAN)
 	struct netns_can	can;
 #endif
+#ifdef CONFIG_XDP_SOCKETS
+	struct netns_xdp	xdp;
+#endif
 	struct sock		*diag_nlsk;
 	atomic_t		fnhe_genid;
include/net/netns/xdp.h (new file)

@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __NETNS_XDP_H__
+#define __NETNS_XDP_H__
+
+#include <linux/rculist.h>
+#include <linux/mutex.h>
+
+struct netns_xdp {
+	struct mutex		lock;
+	struct hlist_head	list;
+};
+
+#endif /* __NETNS_XDP_H__ */
include/net/xdp_sock.h

@@ -42,6 +42,7 @@ struct xdp_umem {
 	struct work_struct work;
 	struct page **pgs;
 	u32 npgs;
+	int id;
 	struct net_device *dev;
 	struct xdp_umem_fq_reuse *fq_reuse;
 	u16 queue_id;
include/uapi/linux/bpf.h

@@ -14,6 +14,7 @@
 /* Extended instruction set based on top of classic BPF */

 /* instruction classes */
+#define BPF_JMP32	0x06	/* jmp mode in word width */
 #define BPF_ALU64	0x07	/* alu mode in double word width */

 /* ld/ldx fields */

@@ -2540,6 +2541,7 @@ struct __sk_buff {
 	__bpf_md_ptr(struct bpf_flow_keys *, flow_keys);
 	__u64 tstamp;
 	__u32 wire_len;
+	__u32 gso_segs;
 };

 struct bpf_tunnel_key {
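With BPF_JMP32 defined as 0x06, the class nibble alone distinguishes wide and narrow jumps. A one-line sketch using the standard uapi accessor (the function name is invented):

#include <stdbool.h>
#include <linux/bpf.h>

static bool insn_is_any_jump(__u8 code)
{
	return BPF_CLASS(code) == BPF_JMP || BPF_CLASS(code) == BPF_JMP32;
}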
include/uapi/linux/xdp_diag.h (new file)

@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * xdp_diag: interface for query/monitor XDP sockets
+ * Copyright(c) 2019 Intel Corporation.
+ */
+
+#ifndef _LINUX_XDP_DIAG_H
+#define _LINUX_XDP_DIAG_H
+
+#include <linux/types.h>
+
+struct xdp_diag_req {
+	__u8	sdiag_family;
+	__u8	sdiag_protocol;
+	__u16	pad;
+	__u32	xdiag_ino;
+	__u32	xdiag_show;
+	__u32	xdiag_cookie[2];
+};
+
+struct xdp_diag_msg {
+	__u8	xdiag_family;
+	__u8	xdiag_type;
+	__u16	pad;
+	__u32	xdiag_ino;
+	__u32	xdiag_cookie[2];
+};
+
+#define XDP_SHOW_INFO		(1 << 0) /* Basic information */
+#define XDP_SHOW_RING_CFG	(1 << 1)
+#define XDP_SHOW_UMEM		(1 << 2)
+#define XDP_SHOW_MEMINFO	(1 << 3)
+
+enum {
+	XDP_DIAG_NONE,
+	XDP_DIAG_INFO,
+	XDP_DIAG_UID,
+	XDP_DIAG_RX_RING,
+	XDP_DIAG_TX_RING,
+	XDP_DIAG_UMEM,
+	XDP_DIAG_UMEM_FILL_RING,
+	XDP_DIAG_UMEM_COMPLETION_RING,
+	XDP_DIAG_MEMINFO,
+	__XDP_DIAG_MAX,
+};
+
+#define XDP_DIAG_MAX (__XDP_DIAG_MAX - 1)
+
+struct xdp_diag_info {
+	__u32	ifindex;
+	__u32	queue_id;
+};
+
+struct xdp_diag_ring {
+	__u32	entries; /*num descs */
+};
+
+#define XDP_DU_F_ZEROCOPY (1 << 0)
+
+struct xdp_diag_umem {
+	__u64	size;
+	__u32	id;
+	__u32	num_pages;
+	__u32	chunk_size;
+	__u32	headroom;
+	__u32	ifindex;
+	__u32	queue_id;
+	__u32	flags;
+	__u32	refs;
+};
+
+#endif /* _LINUX_XDP_DIAG_H */
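A hedged user-space sketch of driving this interface over sock_diag netlink; error handling is minimal, reply parsing is omitted, and it assumes a libc recent enough to define AF_XDP (the function name is invented):

#include <linux/netlink.h>
#include <linux/sock_diag.h>
#include <linux/xdp_diag.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int xdp_diag_dump_request(void)
{
	struct {
		struct nlmsghdr nlh;
		struct xdp_diag_req req;
	} msg;
	int fd;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_SOCK_DIAG);
	if (fd < 0)
		return -1;

	memset(&msg, 0, sizeof(msg));
	msg.nlh.nlmsg_len = sizeof(msg);
	msg.nlh.nlmsg_type = SOCK_DIAG_BY_FAMILY;
	msg.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
	msg.req.sdiag_family = AF_XDP;
	msg.req.xdiag_show = XDP_SHOW_INFO | XDP_SHOW_RING_CFG | XDP_SHOW_UMEM;

	if (send(fd, &msg, sizeof(msg), 0) < 0) {
		close(fd);
		return -1;
	}
	return fd;	/* caller recv()s xdp_diag_msg records from here */
}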
kernel/bpf/btf.c

@@ -157,7 +157,7 @@
 *
 */

-#define BITS_PER_U64 (sizeof(u64) * BITS_PER_BYTE)
+#define BITS_PER_U128 (sizeof(u64) * BITS_PER_BYTE * 2)
 #define BITS_PER_BYTE_MASK (BITS_PER_BYTE - 1)
 #define BITS_PER_BYTE_MASKED(bits) ((bits) & BITS_PER_BYTE_MASK)
 #define BITS_ROUNDDOWN_BYTES(bits) ((bits) >> 3)

@@ -525,7 +525,7 @@ const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id)
 /*
  * Regular int is not a bit field and it must be either
- * u8/u16/u32/u64.
+ * u8/u16/u32/u64 or __int128.
  */
 static bool btf_type_int_is_regular(const struct btf_type *t)
 {

@@ -538,7 +538,8 @@ static bool btf_type_int_is_regular(const struct btf_type *t)
 	if (BITS_PER_BYTE_MASKED(nr_bits) ||
 	    BTF_INT_OFFSET(int_data) ||
 	    (nr_bytes != sizeof(u8) && nr_bytes != sizeof(u16) &&
-	     nr_bytes != sizeof(u32) && nr_bytes != sizeof(u64))) {
+	     nr_bytes != sizeof(u32) && nr_bytes != sizeof(u64) &&
+	     nr_bytes != (2 * sizeof(u64)))) {
 		return false;
 	}

@@ -1063,9 +1064,9 @@ static int btf_int_check_member(struct btf_verifier_env *env,
 	nr_copy_bits = BTF_INT_BITS(int_data) +
 		BITS_PER_BYTE_MASKED(struct_bits_off);

-	if (nr_copy_bits > BITS_PER_U64) {
+	if (nr_copy_bits > BITS_PER_U128) {
 		btf_verifier_log_member(env, struct_type, member,
-					"nr_copy_bits exceeds 64");
+					"nr_copy_bits exceeds 128");
 		return -EINVAL;
 	}

@@ -1119,9 +1120,9 @@ static int btf_int_check_kflag_member(struct btf_verifier_env *env,
 	bytes_offset = BITS_ROUNDDOWN_BYTES(struct_bits_off);
 	nr_copy_bits = nr_bits + BITS_PER_BYTE_MASKED(struct_bits_off);
-	if (nr_copy_bits > BITS_PER_U64) {
+	if (nr_copy_bits > BITS_PER_U128) {
 		btf_verifier_log_member(env, struct_type, member,
-					"nr_copy_bits exceeds 64");
+					"nr_copy_bits exceeds 128");
 		return -EINVAL;
 	}

@@ -1168,9 +1169,9 @@ static s32 btf_int_check_meta(struct btf_verifier_env *env,
 	nr_bits = BTF_INT_BITS(int_data) + BTF_INT_OFFSET(int_data);

-	if (nr_bits > BITS_PER_U64) {
+	if (nr_bits > BITS_PER_U128) {
 		btf_verifier_log_type(env, t, "nr_bits exceeds %zu",
-				      BITS_PER_U64);
+				      BITS_PER_U128);
 		return -EINVAL;
 	}

@@ -1211,31 +1212,93 @@ static void btf_int_log(struct btf_verifier_env *env,
 		   btf_int_encoding_str(BTF_INT_ENCODING(int_data)));
 }

+static void btf_int128_print(struct seq_file *m, void *data)
+{
+	/* data points to a __int128 number.
+	 * Suppose
+	 *     int128_num = *(__int128 *)data;
+	 * The below formulas shows what upper_num and lower_num represents:
+	 *     upper_num = int128_num >> 64;
+	 *     lower_num = int128_num & 0xffffffffFFFFFFFFULL;
+	 */
+	u64 upper_num, lower_num;
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	upper_num = *(u64 *)data;
+	lower_num = *(u64 *)(data + 8);
+#else
+	upper_num = *(u64 *)(data + 8);
+	lower_num = *(u64 *)data;
+#endif
+	if (upper_num == 0)
+		seq_printf(m, "0x%llx", lower_num);
+	else
+		seq_printf(m, "0x%llx%016llx", upper_num, lower_num);
+}
+
+static void btf_int128_shift(u64 *print_num, u16 left_shift_bits,
+			     u16 right_shift_bits)
+{
+	u64 upper_num, lower_num;
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	upper_num = print_num[0];
+	lower_num = print_num[1];
+#else
+	upper_num = print_num[1];
+	lower_num = print_num[0];
+#endif
+
+	/* shake out un-needed bits by shift/or operations */
+	if (left_shift_bits >= 64) {
+		upper_num = lower_num << (left_shift_bits - 64);
+		lower_num = 0;
+	} else {
+		upper_num = (upper_num << left_shift_bits) |
+			    (lower_num >> (64 - left_shift_bits));
+		lower_num = lower_num << left_shift_bits;
+	}
+
+	if (right_shift_bits >= 64) {
+		lower_num = upper_num >> (right_shift_bits - 64);
+		upper_num = 0;
+	} else {
+		lower_num = (lower_num >> right_shift_bits) |
+			    (upper_num << (64 - right_shift_bits));
+		upper_num = upper_num >> right_shift_bits;
+	}
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	print_num[0] = upper_num;
+	print_num[1] = lower_num;
+#else
+	print_num[0] = lower_num;
+	print_num[1] = upper_num;
+#endif
+}
+
 static void btf_bitfield_seq_show(void *data, u8 bits_offset,
 				  u8 nr_bits, struct seq_file *m)
 {
 	u16 left_shift_bits, right_shift_bits;
 	u8 nr_copy_bytes;
 	u8 nr_copy_bits;
-	u64 print_num;
+	u64 print_num[2] = {};

 	nr_copy_bits = nr_bits + bits_offset;
 	nr_copy_bytes = BITS_ROUNDUP_BYTES(nr_copy_bits);

-	print_num = 0;
-	memcpy(&print_num, data, nr_copy_bytes);
+	memcpy(print_num, data, nr_copy_bytes);

 #ifdef __BIG_ENDIAN_BITFIELD
 	left_shift_bits = bits_offset;
 #else
-	left_shift_bits = BITS_PER_U64 - nr_copy_bits;
+	left_shift_bits = BITS_PER_U128 - nr_copy_bits;
 #endif
-	right_shift_bits = BITS_PER_U64 - nr_bits;
+	right_shift_bits = BITS_PER_U128 - nr_bits;

-	print_num <<= left_shift_bits;
-	print_num >>= right_shift_bits;
-
-	seq_printf(m, "0x%llx", print_num);
+	btf_int128_shift(print_num, left_shift_bits, right_shift_bits);
+	btf_int128_print(m, print_num);
 }

@@ -1250,7 +1313,7 @@ static void btf_int_bits_seq_show(const struct btf *btf,
 	/*
 	 * bits_offset is at most 7.
-	 * BTF_INT_OFFSET() cannot exceed 64 bits.
+	 * BTF_INT_OFFSET() cannot exceed 128 bits.
 	 */
 	total_bits_offset = bits_offset + BTF_INT_OFFSET(int_data);
 	data += BITS_ROUNDDOWN_BYTES(total_bits_offset);

@@ -1274,6 +1337,9 @@ static void btf_int_seq_show(const struct btf *btf, const struct btf_type *t,
 	}

 	switch (nr_bits) {
+	case 128:
+		btf_int128_print(m, data);
+		break;
 	case 64:
 		if (sign)
 			seq_printf(m, "%lld", *(s64 *)data);
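A worked example of the little-endian shift math above, for a 3-bit field starting at bit offset 2 (so nr_bits = 3, bits_offset = 2):

	nr_copy_bits     = 3 + 2   = 5
	left_shift_bits  = 128 - 5 = 123
	right_shift_bits = 128 - 3 = 125

Shifting the 128-bit buffer left by 123 discards everything above the field and parks it at the top; shifting right by 125 brings the 3 bits back down to the least significant end. Since 123 >= 64, btf_int128_shift() takes the "upper_num = lower_num << (123 - 64)" branch, doing the whole move across the two u64 halves without ever forming a native 128-bit value.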
@ -307,15 +307,16 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, u32 delta,
|
||||
u32 curr, const bool probe_pass)
|
||||
static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, s32 end_old,
|
||||
s32 end_new, u32 curr, const bool probe_pass)
|
||||
{
|
||||
const s64 imm_min = S32_MIN, imm_max = S32_MAX;
|
||||
s32 delta = end_new - end_old;
|
||||
s64 imm = insn->imm;
|
||||
|
||||
if (curr < pos && curr + imm + 1 > pos)
|
||||
if (curr < pos && curr + imm + 1 >= end_old)
|
||||
imm += delta;
|
||||
else if (curr > pos + delta && curr + imm + 1 <= pos + delta)
|
||||
else if (curr >= end_new && curr + imm + 1 < end_new)
|
||||
imm -= delta;
|
||||
if (imm < imm_min || imm > imm_max)
|
||||
return -ERANGE;
|
||||
|
@ -324,15 +325,16 @@ static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, u32 delta,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, u32 delta,
|
||||
u32 curr, const bool probe_pass)
|
||||
static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old,
|
||||
s32 end_new, u32 curr, const bool probe_pass)
|
||||
{
|
||||
const s32 off_min = S16_MIN, off_max = S16_MAX;
|
||||
s32 delta = end_new - end_old;
|
||||
s32 off = insn->off;
|
||||
|
||||
if (curr < pos && curr + off + 1 > pos)
|
||||
if (curr < pos && curr + off + 1 >= end_old)
|
||||
off += delta;
|
||||
else if (curr > pos + delta && curr + off + 1 <= pos + delta)
|
||||
else if (curr >= end_new && curr + off + 1 < end_new)
|
||||
off -= delta;
|
||||
if (off < off_min || off > off_max)
|
||||
return -ERANGE;
|
||||
|
@ -341,10 +343,10 @@ static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, u32 delta,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int bpf_adj_branches(struct bpf_prog *prog, u32 pos, u32 delta,
|
||||
const bool probe_pass)
|
||||
static int bpf_adj_branches(struct bpf_prog *prog, u32 pos, s32 end_old,
|
||||
s32 end_new, const bool probe_pass)
|
||||
{
|
||||
u32 i, insn_cnt = prog->len + (probe_pass ? delta : 0);
|
||||
u32 i, insn_cnt = prog->len + (probe_pass ? end_new - end_old : 0);
|
||||
struct bpf_insn *insn = prog->insnsi;
|
||||
int ret = 0;
|
||||
|
||||
|
@ -356,22 +358,23 @@ static int bpf_adj_branches(struct bpf_prog *prog, u32 pos, u32 delta,
|
|||
* do any other adjustments. Therefore skip the patchlet.
|
||||
*/
|
||||
if (probe_pass && i == pos) {
|
||||
i += delta + 1;
|
||||
insn++;
|
||||
i = end_new;
|
||||
insn = prog->insnsi + end_old;
|
||||
}
|
||||
code = insn->code;
|
||||
if (BPF_CLASS(code) != BPF_JMP ||
|
||||
if ((BPF_CLASS(code) != BPF_JMP &&
|
||||
BPF_CLASS(code) != BPF_JMP32) ||
|
||||
BPF_OP(code) == BPF_EXIT)
|
||||
continue;
|
||||
/* Adjust offset of jmps if we cross patch boundaries. */
|
||||
if (BPF_OP(code) == BPF_CALL) {
|
||||
if (insn->src_reg != BPF_PSEUDO_CALL)
|
||||
continue;
|
||||
ret = bpf_adj_delta_to_imm(insn, pos, delta, i,
|
||||
probe_pass);
|
||||
ret = bpf_adj_delta_to_imm(insn, pos, end_old,
|
||||
end_new, i, probe_pass);
|
||||
} else {
|
||||
ret = bpf_adj_delta_to_off(insn, pos, delta, i,
|
||||
probe_pass);
|
||||
ret = bpf_adj_delta_to_off(insn, pos, end_old,
|
||||
end_new, i, probe_pass);
|
||||
}
|
||||
if (ret)
|
||||
break;
|
||||
|
@ -421,7 +424,7 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
|
|||
* we afterwards may not fail anymore.
|
||||
*/
|
||||
if (insn_adj_cnt > cnt_max &&
|
||||
bpf_adj_branches(prog, off, insn_delta, true))
|
||||
bpf_adj_branches(prog, off, off + 1, off + len, true))
|
||||
return NULL;
|
||||
|
||||
/* Several new instructions need to be inserted. Make room
|
||||
|
@ -453,13 +456,25 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
|
|||
* the ship has sailed to reverse to the original state. An
|
||||
* overflow cannot happen at this point.
|
||||
*/
|
||||
BUG_ON(bpf_adj_branches(prog_adj, off, insn_delta, false));
|
||||
BUG_ON(bpf_adj_branches(prog_adj, off, off + 1, off + len, false));
|
||||
|
||||
bpf_adj_linfo(prog_adj, off, insn_delta);
|
||||
|
||||
return prog_adj;
|
||||
}
|
||||
|
||||
int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
|
||||
{
|
||||
/* Branch offsets can't overflow when program is shrinking, no need
|
||||
* to call bpf_adj_branches(..., true) here
|
||||
*/
|
||||
memmove(prog->insnsi + off, prog->insnsi + off + cnt,
|
||||
sizeof(struct bpf_insn) * (prog->len - off - cnt));
|
||||
prog->len -= cnt;
|
||||
|
||||
return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false));
|
||||
}
|
||||
|
||||
void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)
|
||||
{
|
||||
int i;
|
||||
|
@ -934,6 +949,27 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
|
|||
*to++ = BPF_JMP_REG(from->code, from->dst_reg, BPF_REG_AX, off);
|
||||
break;
|
||||
|
||||
case BPF_JMP32 | BPF_JEQ | BPF_K:
|
||||
case BPF_JMP32 | BPF_JNE | BPF_K:
|
||||
case BPF_JMP32 | BPF_JGT | BPF_K:
|
||||
case BPF_JMP32 | BPF_JLT | BPF_K:
|
||||
case BPF_JMP32 | BPF_JGE | BPF_K:
|
||||
case BPF_JMP32 | BPF_JLE | BPF_K:
|
||||
case BPF_JMP32 | BPF_JSGT | BPF_K:
|
||||
case BPF_JMP32 | BPF_JSLT | BPF_K:
|
||||
case BPF_JMP32 | BPF_JSGE | BPF_K:
|
||||
case BPF_JMP32 | BPF_JSLE | BPF_K:
|
||||
case BPF_JMP32 | BPF_JSET | BPF_K:
|
||||
/* Accommodate for extra offset in case of a backjump. */
|
||||
off = from->off;
|
||||
if (off < 0)
|
||||
off -= 2;
|
||||
*to++ = BPF_ALU32_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
|
||||
*to++ = BPF_ALU32_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
|
||||
*to++ = BPF_JMP32_REG(from->code, from->dst_reg, BPF_REG_AX,
|
||||
off);
|
||||
break;
|
||||
|
||||
case BPF_LD | BPF_IMM | BPF_DW:
|
||||
*to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ aux[1].imm);
|
||||
*to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
|
||||
|
@ -1130,6 +1166,31 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
|
|||
INSN_2(JMP, CALL), \
|
||||
/* Exit instruction. */ \
|
||||
INSN_2(JMP, EXIT), \
|
||||
/* 32-bit Jump instructions. */ \
|
||||
/* Register based. */ \
|
||||
INSN_3(JMP32, JEQ, X), \
|
||||
INSN_3(JMP32, JNE, X), \
|
||||
INSN_3(JMP32, JGT, X), \
|
||||
INSN_3(JMP32, JLT, X), \
|
||||
INSN_3(JMP32, JGE, X), \
|
||||
INSN_3(JMP32, JLE, X), \
|
||||
INSN_3(JMP32, JSGT, X), \
|
||||
INSN_3(JMP32, JSLT, X), \
|
||||
INSN_3(JMP32, JSGE, X), \
|
||||
INSN_3(JMP32, JSLE, X), \
|
||||
INSN_3(JMP32, JSET, X), \
|
||||
/* Immediate based. */ \
|
||||
INSN_3(JMP32, JEQ, K), \
|
||||
INSN_3(JMP32, JNE, K), \
|
||||
INSN_3(JMP32, JGT, K), \
|
||||
INSN_3(JMP32, JLT, K), \
|
||||
INSN_3(JMP32, JGE, K), \
|
||||
INSN_3(JMP32, JLE, K), \
|
||||
INSN_3(JMP32, JSGT, K), \
|
||||
INSN_3(JMP32, JSLT, K), \
|
||||
INSN_3(JMP32, JSGE, K), \
|
||||
INSN_3(JMP32, JSLE, K), \
|
||||
INSN_3(JMP32, JSET, K), \
|
||||
/* Jump instructions. */ \
|
||||
/* Register based. */ \
|
||||
INSN_3(JMP, JEQ, X), \
|
||||
|
@ -1390,145 +1451,49 @@ select_insn:
|
|||
out:
|
||||
CONT;
|
||||
}
|
||||
/* JMP */
|
||||
JMP_JA:
|
||||
insn += insn->off;
|
||||
CONT;
|
||||
JMP_JEQ_X:
|
||||
if (DST == SRC) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JEQ_K:
|
||||
if (DST == IMM) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JNE_X:
|
||||
if (DST != SRC) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JNE_K:
|
||||
if (DST != IMM) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JGT_X:
|
||||
if (DST > SRC) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JGT_K:
|
||||
if (DST > IMM) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JLT_X:
|
||||
if (DST < SRC) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JLT_K:
|
||||
if (DST < IMM) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JGE_X:
|
||||
if (DST >= SRC) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JGE_K:
|
||||
if (DST >= IMM) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JLE_X:
|
||||
if (DST <= SRC) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JLE_K:
|
||||
if (DST <= IMM) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSGT_X:
|
||||
if (((s64) DST) > ((s64) SRC)) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSGT_K:
|
||||
if (((s64) DST) > ((s64) IMM)) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSLT_X:
|
||||
if (((s64) DST) < ((s64) SRC)) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSLT_K:
|
||||
if (((s64) DST) < ((s64) IMM)) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSGE_X:
|
||||
if (((s64) DST) >= ((s64) SRC)) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSGE_K:
|
||||
if (((s64) DST) >= ((s64) IMM)) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSLE_X:
|
||||
if (((s64) DST) <= ((s64) SRC)) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSLE_K:
|
||||
if (((s64) DST) <= ((s64) IMM)) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSET_X:
|
||||
if (DST & SRC) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_JSET_K:
|
||||
if (DST & IMM) {
|
||||
insn += insn->off;
|
||||
CONT_JMP;
|
||||
}
|
||||
CONT;
|
||||
JMP_EXIT:
|
||||
return BPF_R0;
|
||||
|
||||
+	/* JMP */
+#define COND_JMP(SIGN, OPCODE, CMP_OP)				\
+	JMP_##OPCODE##_X:					\
+		if ((SIGN##64) DST CMP_OP (SIGN##64) SRC) {	\
+			insn += insn->off;			\
+			CONT_JMP;				\
+		}						\
+		CONT;						\
+	JMP32_##OPCODE##_X:					\
+		if ((SIGN##32) DST CMP_OP (SIGN##32) SRC) {	\
+			insn += insn->off;			\
+			CONT_JMP;				\
+		}						\
+		CONT;						\
+	JMP_##OPCODE##_K:					\
+		if ((SIGN##64) DST CMP_OP (SIGN##64) IMM) {	\
+			insn += insn->off;			\
+			CONT_JMP;				\
+		}						\
+		CONT;						\
+	JMP32_##OPCODE##_K:					\
+		if ((SIGN##32) DST CMP_OP (SIGN##32) IMM) {	\
+			insn += insn->off;			\
+			CONT_JMP;				\
+		}						\
+		CONT;
+	COND_JMP(u, JEQ, ==)
+	COND_JMP(u, JNE, !=)
+	COND_JMP(u, JGT, >)
+	COND_JMP(u, JLT, <)
+	COND_JMP(u, JGE, >=)
+	COND_JMP(u, JLE, <=)
+	COND_JMP(u, JSET, &)
+	COND_JMP(s, JSGT, >)
+	COND_JMP(s, JSLT, <)
+	COND_JMP(s, JSGE, >=)
+	COND_JMP(s, JSLE, <=)
+#undef COND_JMP
 	/* STX and ST and LDX*/
 #define LDST(SIZEOP, SIZE)					\
 	STX_MEM_##SIZEOP:					\
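
For reference, hand-expanding one instantiation of the macro above — COND_JMP(s, JSGT, >) — shows the four interpreter targets each opcode now gets (this expansion is illustrative, not part of the diff; only the operand width differs between the JMP and JMP32 flavours):

	JMP_JSGT_X:
		if ((s64) DST > (s64) SRC) { insn += insn->off; CONT_JMP; }
		CONT;
	JMP32_JSGT_X:
		if ((s32) DST > (s32) SRC) { insn += insn->off; CONT_JMP; }
		CONT;
	JMP_JSGT_K:
		if ((s64) DST > (s64) IMM) { insn += insn->off; CONT_JMP; }
		CONT;
	JMP32_JSGT_K:
		if ((s32) DST > (s32) IMM) { insn += insn->off; CONT_JMP; }
		CONT;
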
@@ -67,7 +67,7 @@ const char *const bpf_class_string[8] = {
 	[BPF_STX]   = "stx",
 	[BPF_ALU]   = "alu",
 	[BPF_JMP]   = "jmp",
-	[BPF_RET]   = "BUG",
+	[BPF_JMP32] = "jmp32",
 	[BPF_ALU64] = "alu64",
 };

@@ -136,23 +136,22 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 		else
 			print_bpf_end_insn(verbose, cbs->private_data, insn);
 	} else if (BPF_OP(insn->code) == BPF_NEG) {
-		verbose(cbs->private_data, "(%02x) r%d = %s-r%d\n",
-			insn->code, insn->dst_reg,
-			class == BPF_ALU ? "(u32) " : "",
+		verbose(cbs->private_data, "(%02x) %c%d = -%c%d\n",
+			insn->code, class == BPF_ALU ? 'w' : 'r',
+			insn->dst_reg, class == BPF_ALU ? 'w' : 'r',
 			insn->dst_reg);
 	} else if (BPF_SRC(insn->code) == BPF_X) {
-		verbose(cbs->private_data, "(%02x) %sr%d %s %sr%d\n",
-			insn->code, class == BPF_ALU ? "(u32) " : "",
+		verbose(cbs->private_data, "(%02x) %c%d %s %c%d\n",
+			insn->code, class == BPF_ALU ? 'w' : 'r',
 			insn->dst_reg,
 			bpf_alu_string[BPF_OP(insn->code) >> 4],
-			class == BPF_ALU ? "(u32) " : "",
+			class == BPF_ALU ? 'w' : 'r',
 			insn->src_reg);
 	} else {
-		verbose(cbs->private_data, "(%02x) %sr%d %s %s%d\n",
-			insn->code, class == BPF_ALU ? "(u32) " : "",
+		verbose(cbs->private_data, "(%02x) %c%d %s %d\n",
+			insn->code, class == BPF_ALU ? 'w' : 'r',
 			insn->dst_reg,
 			bpf_alu_string[BPF_OP(insn->code) >> 4],
-			class == BPF_ALU ? "(u32) " : "",
 			insn->imm);
 	}
 } else if (class == BPF_STX) {

@@ -220,7 +219,7 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 			verbose(cbs->private_data, "BUG_ld_%02x\n", insn->code);
 			return;
 		}
-	} else if (class == BPF_JMP) {
+	} else if (class == BPF_JMP32 || class == BPF_JMP) {
 		u8 opcode = BPF_OP(insn->code);

 		if (opcode == BPF_CALL) {

@@ -244,13 +243,18 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 		} else if (insn->code == (BPF_JMP | BPF_EXIT)) {
 			verbose(cbs->private_data, "(%02x) exit\n", insn->code);
 		} else if (BPF_SRC(insn->code) == BPF_X) {
-			verbose(cbs->private_data, "(%02x) if r%d %s r%d goto pc%+d\n",
-				insn->code, insn->dst_reg,
+			verbose(cbs->private_data,
+				"(%02x) if %c%d %s %c%d goto pc%+d\n",
+				insn->code, class == BPF_JMP32 ? 'w' : 'r',
+				insn->dst_reg,
 				bpf_jmp_string[BPF_OP(insn->code) >> 4],
+				class == BPF_JMP32 ? 'w' : 'r',
 				insn->src_reg, insn->off);
 		} else {
-			verbose(cbs->private_data, "(%02x) if r%d %s 0x%x goto pc%+d\n",
-				insn->code, insn->dst_reg,
+			verbose(cbs->private_data,
+				"(%02x) if %c%d %s 0x%x goto pc%+d\n",
+				insn->code, class == BPF_JMP32 ? 'w' : 'r',
+				insn->dst_reg,
 				bpf_jmp_string[BPF_OP(insn->code) >> 4],
 				insn->imm, insn->off);
 		}

@@ -173,6 +173,41 @@ int bpf_prog_offload_finalize(struct bpf_verifier_env *env)
 	return ret;
 }

+void
+bpf_prog_offload_replace_insn(struct bpf_verifier_env *env, u32 off,
+			      struct bpf_insn *insn)
+{
+	const struct bpf_prog_offload_ops *ops;
+	struct bpf_prog_offload *offload;
+	int ret = -EOPNOTSUPP;
+
+	down_read(&bpf_devs_lock);
+	offload = env->prog->aux->offload;
+	if (offload) {
+		ops = offload->offdev->ops;
+		if (!offload->opt_failed && ops->replace_insn)
+			ret = ops->replace_insn(env, off, insn);
+		offload->opt_failed |= ret;
+	}
+	up_read(&bpf_devs_lock);
+}
+
+void
+bpf_prog_offload_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt)
+{
+	struct bpf_prog_offload *offload;
+	int ret = -EOPNOTSUPP;
+
+	down_read(&bpf_devs_lock);
+	offload = env->prog->aux->offload;
+	if (offload) {
+		if (!offload->opt_failed && offload->offdev->ops->remove_insns)
+			ret = offload->offdev->ops->remove_insns(env, off, cnt);
+		offload->opt_failed |= ret;
+	}
+	up_read(&bpf_devs_lock);
+}
+
 static void __bpf_prog_offload_destroy(struct bpf_prog *prog)
 {
 	struct bpf_prog_offload *offload = prog->aux->offload;

@@ -1095,7 +1095,7 @@ static int check_subprogs(struct bpf_verifier_env *env)
 	for (i = 0; i < insn_cnt; i++) {
 		u8 code = insn[i].code;

-		if (BPF_CLASS(code) != BPF_JMP)
+		if (BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32)
 			goto next;
 		if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL)
 			goto next;

@@ -4031,11 +4031,50 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
  *  0 - branch will not be taken and fall-through to next insn
  * -1 - unknown. Example: "if (reg < 5)" is unknown when register value range [0,10]
  */
-static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
+static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode,
+			   bool is_jmp32)
 {
+	struct bpf_reg_state reg_lo;
+	s64 sval;
+
 	if (__is_pointer_value(false, reg))
 		return -1;

+	if (is_jmp32) {
+		reg_lo = *reg;
+		reg = &reg_lo;
+		/* For JMP32, only low 32 bits are compared, coerce_reg_to_size
+		 * could truncate high bits and update umin/umax according to
+		 * information of low bits.
+		 */
+		coerce_reg_to_size(reg, 4);
+		/* smin/smax need special handling. For example, after coerce,
+		 * if smin_value is 0x00000000ffffffffLL, the value is -1 when
+		 * used as operand to JMP32. It is a negative number from s32's
+		 * point of view, while it is a positive number when seen as
+		 * s64. The smin/smax are kept as s64, therefore, when used with
+		 * JMP32, they need to be transformed into s32, then sign
+		 * extended back to s64.
+		 *
+		 * Also, smin/smax were copied from umin/umax. If umin/umax has
+		 * different sign bit, then min/max relationship doesn't
+		 * maintain after casting into s32, for this case, set smin/smax
+		 * to safest range.
+		 */
+		if ((reg->umax_value ^ reg->umin_value) &
+		    (1ULL << 31)) {
+			reg->smin_value = S32_MIN;
+			reg->smax_value = S32_MAX;
+		}
+		reg->smin_value = (s64)(s32)reg->smin_value;
+		reg->smax_value = (s64)(s32)reg->smax_value;
+
+		val = (u32)val;
+		sval = (s64)(s32)val;
+	} else {
+		sval = (s64)val;
+	}
+
 	switch (opcode) {
 	case BPF_JEQ:
 		if (tnum_is_const(reg->var_off))
@@ -4058,9 +4097,9 @@ static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
 			return 0;
 		break;
 	case BPF_JSGT:
-		if (reg->smin_value > (s64)val)
+		if (reg->smin_value > sval)
 			return 1;
-		else if (reg->smax_value < (s64)val)
+		else if (reg->smax_value < sval)
 			return 0;
 		break;
 	case BPF_JLT:

@@ -4070,9 +4109,9 @@ static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
 			return 0;
 		break;
 	case BPF_JSLT:
-		if (reg->smax_value < (s64)val)
+		if (reg->smax_value < sval)
 			return 1;
-		else if (reg->smin_value >= (s64)val)
+		else if (reg->smin_value >= sval)
 			return 0;
 		break;
 	case BPF_JGE:

@@ -4082,9 +4121,9 @@ static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
 			return 0;
 		break;
 	case BPF_JSGE:
-		if (reg->smin_value >= (s64)val)
+		if (reg->smin_value >= sval)
 			return 1;
-		else if (reg->smax_value < (s64)val)
+		else if (reg->smax_value < sval)
 			return 0;
 		break;
 	case BPF_JLE:

@@ -4094,9 +4133,9 @@ static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
 			return 0;
 		break;
 	case BPF_JSLE:
-		if (reg->smax_value <= (s64)val)
+		if (reg->smax_value <= sval)
 			return 1;
-		else if (reg->smin_value > (s64)val)
+		else if (reg->smin_value > sval)
 			return 0;
 		break;
 	}

@@ -4104,6 +4143,29 @@ static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
 	return -1;
 }

+/* Generate min value of the high 32-bit from TNUM info. */
+static u64 gen_hi_min(struct tnum var)
+{
+	return var.value & ~0xffffffffULL;
+}
+
+/* Generate max value of the high 32-bit from TNUM info. */
+static u64 gen_hi_max(struct tnum var)
+{
+	return (var.value | var.mask) & ~0xffffffffULL;
+}
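
A quick sanity check of these two helpers (illustrative, using a local stand-in for the kernel's two-field struct tnum, where value holds the known-set bits and mask the unknown bits):

	#include <stdio.h>
	#include <stdint.h>

	struct tnum {
		uint64_t value;	/* bits known to be set */
		uint64_t mask;	/* bits whose value is unknown */
	};

	static uint64_t gen_hi_min(struct tnum var)
	{
		return var.value & ~0xffffffffULL;
	}

	static uint64_t gen_hi_max(struct tnum var)
	{
		return (var.value | var.mask) & ~0xffffffffULL;
	}

	int main(void)
	{
		/* bit 32 known set, bit 33 unknown, low 32 bits fully unknown */
		struct tnum t = {
			.value = 1ULL << 32,
			.mask  = (1ULL << 33) | 0xffffffffULL,
		};

		printf("hi min %#llx\n", (unsigned long long)gen_hi_min(t));
		printf("hi max %#llx\n", (unsigned long long)gen_hi_max(t));
		return 0;	/* prints 0x100000000 and 0x300000000 */
	}
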
+/* Return true if VAL is compared with a s64 sign extended from s32, and they
+ * are with the same signedness.
+ */
+static bool cmp_val_with_extended_s64(s64 sval, struct bpf_reg_state *reg)
+{
+	return ((s32)sval >= 0 &&
+		reg->smin_value >= 0 && reg->smax_value <= S32_MAX) ||
+	       ((s32)sval < 0 &&
+		reg->smax_value <= 0 && reg->smin_value >= S32_MIN);
+}
+
 /* Adjusts the register min/max values in the case that the dst_reg is the
  * variable register that we are working on, and src_reg is a constant or we're
  * simply doing a BPF_K check.

@@ -4111,8 +4173,10 @@ static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
  */
 static void reg_set_min_max(struct bpf_reg_state *true_reg,
 			    struct bpf_reg_state *false_reg, u64 val,
-			    u8 opcode)
+			    u8 opcode, bool is_jmp32)
 {
+	s64 sval;
+
 	/* If the dst_reg is a pointer, we can't learn anything about its
 	 * variable offset from the compare (unless src_reg were a pointer into
 	 * the same object, but we don't bother with that.

@@ -4122,19 +4186,31 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
 	if (__is_pointer_value(false, false_reg))
 		return;

+	val = is_jmp32 ? (u32)val : val;
+	sval = is_jmp32 ? (s64)(s32)val : (s64)val;
+
 	switch (opcode) {
 	case BPF_JEQ:
-		/* If this is false then we know nothing Jon Snow, but if it is
-		 * true then we know for sure.
-		 */
-		__mark_reg_known(true_reg, val);
-		break;
 	case BPF_JNE:
-		/* If this is true we know nothing Jon Snow, but if it is false
-		 * we know the value for sure;
+	{
+		struct bpf_reg_state *reg =
+			opcode == BPF_JEQ ? true_reg : false_reg;
+
+		/* For BPF_JEQ, if this is false we know nothing Jon Snow, but
+		 * if it is true we know the value for sure. Likewise for
+		 * BPF_JNE.
 		 */
-		__mark_reg_known(false_reg, val);
+		if (is_jmp32) {
+			u64 old_v = reg->var_off.value;
+			u64 hi_mask = ~0xffffffffULL;
+
+			reg->var_off.value = (old_v & hi_mask) | val;
+			reg->var_off.mask &= hi_mask;
+		} else {
+			__mark_reg_known(reg, val);
+		}
 		break;
+	}
 	case BPF_JSET:
 		false_reg->var_off = tnum_and(false_reg->var_off,
 					      tnum_const(~val));

@@ -4142,38 +4218,61 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
 		true_reg->var_off = tnum_or(true_reg->var_off,
 					    tnum_const(val));
 		break;
-	case BPF_JGT:
-		false_reg->umax_value = min(false_reg->umax_value, val);
-		true_reg->umin_value = max(true_reg->umin_value, val + 1);
-		break;
-	case BPF_JSGT:
-		false_reg->smax_value = min_t(s64, false_reg->smax_value, val);
-		true_reg->smin_value = max_t(s64, true_reg->smin_value, val + 1);
-		break;
-	case BPF_JLT:
-		false_reg->umin_value = max(false_reg->umin_value, val);
-		true_reg->umax_value = min(true_reg->umax_value, val - 1);
-		break;
-	case BPF_JSLT:
-		false_reg->smin_value = max_t(s64, false_reg->smin_value, val);
-		true_reg->smax_value = min_t(s64, true_reg->smax_value, val - 1);
-		break;
 	case BPF_JGE:
-		false_reg->umax_value = min(false_reg->umax_value, val - 1);
-		true_reg->umin_value = max(true_reg->umin_value, val);
+	case BPF_JGT:
+	{
+		u64 false_umax = opcode == BPF_JGT ? val    : val - 1;
+		u64 true_umin = opcode == BPF_JGT ? val + 1 : val;
+
+		if (is_jmp32) {
+			false_umax += gen_hi_max(false_reg->var_off);
+			true_umin += gen_hi_min(true_reg->var_off);
+		}
+		false_reg->umax_value = min(false_reg->umax_value, false_umax);
+		true_reg->umin_value = max(true_reg->umin_value, true_umin);
 		break;
+	}
 	case BPF_JSGE:
-		false_reg->smax_value = min_t(s64, false_reg->smax_value, val - 1);
-		true_reg->smin_value = max_t(s64, true_reg->smin_value, val);
+	case BPF_JSGT:
+	{
+		s64 false_smax = opcode == BPF_JSGT ? sval    : sval - 1;
+		s64 true_smin = opcode == BPF_JSGT ? sval + 1 : sval;
+
+		/* If the full s64 was not sign-extended from s32 then don't
+		 * deduct further info.
+		 */
+		if (is_jmp32 && !cmp_val_with_extended_s64(sval, false_reg))
+			break;
+		false_reg->smax_value = min(false_reg->smax_value, false_smax);
+		true_reg->smin_value = max(true_reg->smin_value, true_smin);
 		break;
+	}
 	case BPF_JLE:
-		false_reg->umin_value = max(false_reg->umin_value, val + 1);
-		true_reg->umax_value = min(true_reg->umax_value, val);
+	case BPF_JLT:
+	{
+		u64 false_umin = opcode == BPF_JLT ? val    : val + 1;
+		u64 true_umax = opcode == BPF_JLT ? val - 1 : val;
+
+		if (is_jmp32) {
+			false_umin += gen_hi_min(false_reg->var_off);
+			true_umax += gen_hi_max(true_reg->var_off);
+		}
+		false_reg->umin_value = max(false_reg->umin_value, false_umin);
+		true_reg->umax_value = min(true_reg->umax_value, true_umax);
 		break;
+	}
 	case BPF_JSLE:
-		false_reg->smin_value = max_t(s64, false_reg->smin_value, val + 1);
-		true_reg->smax_value = min_t(s64, true_reg->smax_value, val);
+	case BPF_JSLT:
+	{
+		s64 false_smin = opcode == BPF_JSLT ? sval    : sval + 1;
+		s64 true_smax = opcode == BPF_JSLT ? sval - 1 : sval;
+
+		if (is_jmp32 && !cmp_val_with_extended_s64(sval, false_reg))
+			break;
+		false_reg->smin_value = max(false_reg->smin_value, false_smin);
+		true_reg->smax_value = min(true_reg->smax_value, true_smax);
 		break;
+	}
 	default:
 		break;
 	}

@@ -4196,24 +4295,34 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
  */
 static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
 				struct bpf_reg_state *false_reg, u64 val,
-				u8 opcode)
+				u8 opcode, bool is_jmp32)
 {
+	s64 sval;
+
 	if (__is_pointer_value(false, false_reg))
 		return;

+	val = is_jmp32 ? (u32)val : val;
+	sval = is_jmp32 ? (s64)(s32)val : (s64)val;
+
 	switch (opcode) {
 	case BPF_JEQ:
-		/* If this is false then we know nothing Jon Snow, but if it is
-		 * true then we know for sure.
-		 */
-		__mark_reg_known(true_reg, val);
-		break;
 	case BPF_JNE:
-		/* If this is true we know nothing Jon Snow, but if it is false
-		 * we know the value for sure;
-		 */
-		__mark_reg_known(false_reg, val);
+	{
+		struct bpf_reg_state *reg =
+			opcode == BPF_JEQ ? true_reg : false_reg;
+
+		if (is_jmp32) {
+			u64 old_v = reg->var_off.value;
+			u64 hi_mask = ~0xffffffffULL;
+
+			reg->var_off.value = (old_v & hi_mask) | val;
+			reg->var_off.mask &= hi_mask;
+		} else {
+			__mark_reg_known(reg, val);
+		}
 		break;
+	}
 	case BPF_JSET:
 		false_reg->var_off = tnum_and(false_reg->var_off,
 					      tnum_const(~val));

@@ -4221,38 +4330,58 @@ static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
 		true_reg->var_off = tnum_or(true_reg->var_off,
 					    tnum_const(val));
 		break;
-	case BPF_JGT:
-		true_reg->umax_value = min(true_reg->umax_value, val - 1);
-		false_reg->umin_value = max(false_reg->umin_value, val);
-		break;
-	case BPF_JSGT:
-		true_reg->smax_value = min_t(s64, true_reg->smax_value, val - 1);
-		false_reg->smin_value = max_t(s64, false_reg->smin_value, val);
-		break;
-	case BPF_JLT:
-		true_reg->umin_value = max(true_reg->umin_value, val + 1);
-		false_reg->umax_value = min(false_reg->umax_value, val);
-		break;
-	case BPF_JSLT:
-		true_reg->smin_value = max_t(s64, true_reg->smin_value, val + 1);
-		false_reg->smax_value = min_t(s64, false_reg->smax_value, val);
-		break;
 	case BPF_JGE:
-		true_reg->umax_value = min(true_reg->umax_value, val);
-		false_reg->umin_value = max(false_reg->umin_value, val + 1);
+	case BPF_JGT:
+	{
+		u64 false_umin = opcode == BPF_JGT ? val    : val + 1;
+		u64 true_umax = opcode == BPF_JGT ? val - 1 : val;
+
+		if (is_jmp32) {
+			false_umin += gen_hi_min(false_reg->var_off);
+			true_umax += gen_hi_max(true_reg->var_off);
+		}
+		false_reg->umin_value = max(false_reg->umin_value, false_umin);
+		true_reg->umax_value = min(true_reg->umax_value, true_umax);
 		break;
+	}
 	case BPF_JSGE:
-		true_reg->smax_value = min_t(s64, true_reg->smax_value, val);
-		false_reg->smin_value = max_t(s64, false_reg->smin_value, val + 1);
+	case BPF_JSGT:
+	{
+		s64 false_smin = opcode == BPF_JSGT ? sval    : sval + 1;
+		s64 true_smax = opcode == BPF_JSGT ? sval - 1 : sval;
+
+		if (is_jmp32 && !cmp_val_with_extended_s64(sval, false_reg))
+			break;
+		false_reg->smin_value = max(false_reg->smin_value, false_smin);
+		true_reg->smax_value = min(true_reg->smax_value, true_smax);
 		break;
+	}
 	case BPF_JLE:
-		true_reg->umin_value = max(true_reg->umin_value, val);
-		false_reg->umax_value = min(false_reg->umax_value, val - 1);
+	case BPF_JLT:
+	{
+		u64 false_umax = opcode == BPF_JLT ? val    : val - 1;
+		u64 true_umin = opcode == BPF_JLT ? val + 1 : val;
+
+		if (is_jmp32) {
+			false_umax += gen_hi_max(false_reg->var_off);
+			true_umin += gen_hi_min(true_reg->var_off);
+		}
+		false_reg->umax_value = min(false_reg->umax_value, false_umax);
+		true_reg->umin_value = max(true_reg->umin_value, true_umin);
 		break;
+	}
 	case BPF_JSLE:
-		true_reg->smin_value = max_t(s64, true_reg->smin_value, val);
-		false_reg->smax_value = min_t(s64, false_reg->smax_value, val - 1);
+	case BPF_JSLT:
+	{
+		s64 false_smax = opcode == BPF_JSLT ? sval    : sval - 1;
+		s64 true_smin = opcode == BPF_JSLT ? sval + 1 : sval;
+
+		if (is_jmp32 && !cmp_val_with_extended_s64(sval, false_reg))
+			break;
+		false_reg->smax_value = min(false_reg->smax_value, false_smax);
+		true_reg->smin_value = max(true_reg->smin_value, true_smin);
 		break;
+	}
 	default:
 		break;
 	}

@@ -4390,6 +4519,10 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
 	if (BPF_SRC(insn->code) != BPF_X)
 		return false;

+	/* Pointers are always 64-bit. */
+	if (BPF_CLASS(insn->code) == BPF_JMP32)
+		return false;
+
 	switch (BPF_OP(insn->code)) {
 	case BPF_JGT:
 		if ((dst_reg->type == PTR_TO_PACKET &&

@@ -4482,16 +4615,18 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	struct bpf_reg_state *regs = this_branch->frame[this_branch->curframe]->regs;
 	struct bpf_reg_state *dst_reg, *other_branch_regs;
 	u8 opcode = BPF_OP(insn->code);
+	bool is_jmp32;
 	int err;

-	if (opcode > BPF_JSLE) {
-		verbose(env, "invalid BPF_JMP opcode %x\n", opcode);
+	/* Only conditional jumps are expected to reach here. */
+	if (opcode == BPF_JA || opcode > BPF_JSLE) {
+		verbose(env, "invalid BPF_JMP/JMP32 opcode %x\n", opcode);
 		return -EINVAL;
 	}

 	if (BPF_SRC(insn->code) == BPF_X) {
 		if (insn->imm != 0) {
-			verbose(env, "BPF_JMP uses reserved fields\n");
+			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
 			return -EINVAL;
 		}

@@ -4507,7 +4642,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 		}
 	} else {
 		if (insn->src_reg != BPF_REG_0) {
-			verbose(env, "BPF_JMP uses reserved fields\n");
+			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
 			return -EINVAL;
 		}
 	}

@@ -4518,9 +4653,11 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 		return err;

 	dst_reg = &regs[insn->dst_reg];
+	is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;

 	if (BPF_SRC(insn->code) == BPF_K) {
-		int pred = is_branch_taken(dst_reg, insn->imm, opcode);
+		int pred = is_branch_taken(dst_reg, insn->imm, opcode,
+					   is_jmp32);

 		if (pred == 1) {
 			/* only follow the goto, ignore fall-through */

@@ -4548,30 +4685,51 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	 * comparable.
 	 */
 	if (BPF_SRC(insn->code) == BPF_X) {
+		struct bpf_reg_state *src_reg = &regs[insn->src_reg];
+		struct bpf_reg_state lo_reg0 = *dst_reg;
+		struct bpf_reg_state lo_reg1 = *src_reg;
+		struct bpf_reg_state *src_lo, *dst_lo;
+
+		dst_lo = &lo_reg0;
+		src_lo = &lo_reg1;
+		coerce_reg_to_size(dst_lo, 4);
+		coerce_reg_to_size(src_lo, 4);
+
 		if (dst_reg->type == SCALAR_VALUE &&
-		    regs[insn->src_reg].type == SCALAR_VALUE) {
-			if (tnum_is_const(regs[insn->src_reg].var_off))
+		    src_reg->type == SCALAR_VALUE) {
+			if (tnum_is_const(src_reg->var_off) ||
+			    (is_jmp32 && tnum_is_const(src_lo->var_off)))
 				reg_set_min_max(&other_branch_regs[insn->dst_reg],
-						dst_reg, regs[insn->src_reg].var_off.value,
-						opcode);
-			else if (tnum_is_const(dst_reg->var_off))
+						dst_reg,
+						is_jmp32
+						? src_lo->var_off.value
+						: src_reg->var_off.value,
+						opcode, is_jmp32);
+			else if (tnum_is_const(dst_reg->var_off) ||
+				 (is_jmp32 && tnum_is_const(dst_lo->var_off)))
 				reg_set_min_max_inv(&other_branch_regs[insn->src_reg],
-						    &regs[insn->src_reg],
-						    dst_reg->var_off.value, opcode);
-			else if (opcode == BPF_JEQ || opcode == BPF_JNE)
+						    src_reg,
+						    is_jmp32
+						    ? dst_lo->var_off.value
+						    : dst_reg->var_off.value,
+						    opcode, is_jmp32);
+			else if (!is_jmp32 &&
+				 (opcode == BPF_JEQ || opcode == BPF_JNE))
 				/* Comparing for equality, we can combine knowledge */
 				reg_combine_min_max(&other_branch_regs[insn->src_reg],
 						    &other_branch_regs[insn->dst_reg],
-						    &regs[insn->src_reg],
-						    &regs[insn->dst_reg], opcode);
+						    src_reg, dst_reg, opcode);
 		}
 	} else if (dst_reg->type == SCALAR_VALUE) {
 		reg_set_min_max(&other_branch_regs[insn->dst_reg],
-					dst_reg, insn->imm, opcode);
+					dst_reg, insn->imm, opcode, is_jmp32);
 	}

-	/* detect if R == 0 where R is returned from bpf_map_lookup_elem() */
-	if (BPF_SRC(insn->code) == BPF_K &&
+	/* detect if R == 0 where R is returned from bpf_map_lookup_elem().
+	 * NOTE: these optimizations below are related with pointer comparison
+	 *       which will never be JMP32.
+	 */
+	if (!is_jmp32 && BPF_SRC(insn->code) == BPF_K &&
 	    insn->imm == 0 && (opcode == BPF_JEQ || opcode == BPF_JNE) &&
 	    reg_type_may_be_null(dst_reg->type)) {
 		/* Mark all identical registers in each branch as either

@@ -4900,7 +5058,8 @@ peek_stack:
 		goto check_state;
 	t = insn_stack[cur_stack - 1];

-	if (BPF_CLASS(insns[t].code) == BPF_JMP) {
+	if (BPF_CLASS(insns[t].code) == BPF_JMP ||
+	    BPF_CLASS(insns[t].code) == BPF_JMP32) {
 		u8 opcode = BPF_OP(insns[t].code);

 		if (opcode == BPF_EXIT) {

@@ -4997,13 +5156,14 @@ static int check_btf_func(struct bpf_verifier_env *env,
 			  const union bpf_attr *attr,
 			  union bpf_attr __user *uattr)
 {
-	u32 i, nfuncs, urec_size, min_size, prev_offset;
+	u32 i, nfuncs, urec_size, min_size;
 	u32 krec_size = sizeof(struct bpf_func_info);
 	struct bpf_func_info *krecord;
 	const struct btf_type *type;
 	struct bpf_prog *prog;
 	const struct btf *btf;
 	void __user *urecord;
+	u32 prev_offset = 0;
 	int ret = 0;

 	nfuncs = attr->func_info_cnt;

@@ -6055,7 +6215,7 @@ static int do_check(struct bpf_verifier_env *env)
 			if (err)
 				return err;

-		} else if (class == BPF_JMP) {
+		} else if (class == BPF_JMP || class == BPF_JMP32) {
 			u8 opcode = BPF_OP(insn->code);

 			if (opcode == BPF_CALL) {

@@ -6063,7 +6223,8 @@ static int do_check(struct bpf_verifier_env *env)
 				    insn->off != 0 ||
 				    (insn->src_reg != BPF_REG_0 &&
 				     insn->src_reg != BPF_PSEUDO_CALL) ||
-				    insn->dst_reg != BPF_REG_0) {
+				    insn->dst_reg != BPF_REG_0 ||
+				    class == BPF_JMP32) {
 					verbose(env, "BPF_CALL uses reserved fields\n");
 					return -EINVAL;
 				}

@@ -6079,7 +6240,8 @@ static int do_check(struct bpf_verifier_env *env)
 				if (BPF_SRC(insn->code) != BPF_K ||
 				    insn->imm != 0 ||
 				    insn->src_reg != BPF_REG_0 ||
-				    insn->dst_reg != BPF_REG_0) {
+				    insn->dst_reg != BPF_REG_0 ||
+				    class == BPF_JMP32) {
 					verbose(env, "BPF_JA uses reserved fields\n");
 					return -EINVAL;
 				}

@@ -6091,7 +6253,8 @@ static int do_check(struct bpf_verifier_env *env)
 				if (BPF_SRC(insn->code) != BPF_K ||
 				    insn->imm != 0 ||
 				    insn->src_reg != BPF_REG_0 ||
-				    insn->dst_reg != BPF_REG_0) {
+				    insn->dst_reg != BPF_REG_0 ||
+				    class == BPF_JMP32) {
 					verbose(env, "BPF_EXIT uses reserved fields\n");
 					return -EINVAL;
 				}

@@ -6431,6 +6594,153 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
 	return new_prog;
 }

+static int adjust_subprog_starts_after_remove(struct bpf_verifier_env *env,
+					      u32 off, u32 cnt)
+{
+	int i, j;
+
+	/* find first prog starting at or after off (first to remove) */
+	for (i = 0; i < env->subprog_cnt; i++)
+		if (env->subprog_info[i].start >= off)
+			break;
+	/* find first prog starting at or after off + cnt (first to stay) */
+	for (j = i; j < env->subprog_cnt; j++)
+		if (env->subprog_info[j].start >= off + cnt)
+			break;
+	/* if j doesn't start exactly at off + cnt, we are just removing
+	 * the front of previous prog
+	 */
+	if (env->subprog_info[j].start != off + cnt)
+		j--;
+
+	if (j > i) {
+		struct bpf_prog_aux *aux = env->prog->aux;
+		int move;
+
+		/* move fake 'exit' subprog as well */
+		move = env->subprog_cnt + 1 - j;
+
+		memmove(env->subprog_info + i,
+			env->subprog_info + j,
+			sizeof(*env->subprog_info) * move);
+		env->subprog_cnt -= j - i;
+
+		/* remove func_info */
+		if (aux->func_info) {
+			move = aux->func_info_cnt - j;
+
+			memmove(aux->func_info + i,
+				aux->func_info + j,
+				sizeof(*aux->func_info) * move);
+			aux->func_info_cnt -= j - i;
+			/* func_info->insn_off is set after all code rewrites,
+			 * in adjust_btf_func() - no need to adjust
+			 */
+		}
+	} else {
+		/* convert i from "first prog to remove" to "first to adjust" */
+		if (env->subprog_info[i].start == off)
+			i++;
+	}
+
+	/* update fake 'exit' subprog as well */
+	for (; i <= env->subprog_cnt; i++)
+		env->subprog_info[i].start -= cnt;
+
+	return 0;
+}
+
+static int bpf_adj_linfo_after_remove(struct bpf_verifier_env *env, u32 off,
+				      u32 cnt)
+{
+	struct bpf_prog *prog = env->prog;
+	u32 i, l_off, l_cnt, nr_linfo;
+	struct bpf_line_info *linfo;
+
+	nr_linfo = prog->aux->nr_linfo;
+	if (!nr_linfo)
+		return 0;
+
+	linfo = prog->aux->linfo;
+
+	/* find first line info to remove, count lines to be removed */
+	for (i = 0; i < nr_linfo; i++)
+		if (linfo[i].insn_off >= off)
+			break;
+
+	l_off = i;
+	l_cnt = 0;
+	for (; i < nr_linfo; i++)
+		if (linfo[i].insn_off < off + cnt)
+			l_cnt++;
+		else
+			break;
+
+	/* First live insn doesn't match first live linfo, it needs to "inherit"
+	 * last removed linfo.  prog is already modified, so prog->len == off
+	 * means no live instructions after (tail of the program was removed).
+	 */
+	if (prog->len != off && l_cnt &&
+	    (i == nr_linfo || linfo[i].insn_off != off + cnt)) {
+		l_cnt--;
+		linfo[--i].insn_off = off + cnt;
+	}
+
+	/* remove the line info which refer to the removed instructions */
+	if (l_cnt) {
+		memmove(linfo + l_off, linfo + i,
+			sizeof(*linfo) * (nr_linfo - i));
+
+		prog->aux->nr_linfo -= l_cnt;
+		nr_linfo = prog->aux->nr_linfo;
+	}
+
+	/* pull all linfo[i].insn_off >= off + cnt in by cnt */
+	for (i = l_off; i < nr_linfo; i++)
+		linfo[i].insn_off -= cnt;
+
+	/* fix up all subprogs (incl. 'exit') which start >= off */
+	for (i = 0; i <= env->subprog_cnt; i++)
+		if (env->subprog_info[i].linfo_idx > l_off) {
+			/* program may have started in the removed region but
+			 * may not be fully removed
+			 */
+			if (env->subprog_info[i].linfo_idx >= l_off + l_cnt)
+				env->subprog_info[i].linfo_idx -= l_cnt;
+			else
+				env->subprog_info[i].linfo_idx = l_off;
+		}
+
+	return 0;
+}
+
+static int verifier_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt)
+{
+	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+	unsigned int orig_prog_len = env->prog->len;
+	int err;
+
+	if (bpf_prog_is_dev_bound(env->prog->aux))
+		bpf_prog_offload_remove_insns(env, off, cnt);
+
+	err = bpf_remove_insns(env->prog, off, cnt);
+	if (err)
+		return err;
+
+	err = adjust_subprog_starts_after_remove(env, off, cnt);
+	if (err)
+		return err;
+
+	err = bpf_adj_linfo_after_remove(env, off, cnt);
+	if (err)
+		return err;
+
+	memmove(aux_data + off, aux_data + off + cnt,
+		sizeof(*aux_data) * (orig_prog_len - off - cnt));
+
+	return 0;
+}
+
 /* The verifier does more data flow analysis than llvm and will not
  * explore branches that are dead at run time. Malicious programs can
  * have dead code too. Therefore replace all dead at-run-time code

@@ -6457,6 +6767,91 @@ static void sanitize_dead_code(struct bpf_verifier_env *env)
 	}
 }

+static bool insn_is_cond_jump(u8 code)
+{
+	u8 op;
+
+	if (BPF_CLASS(code) == BPF_JMP32)
+		return true;
+
+	if (BPF_CLASS(code) != BPF_JMP)
+		return false;
+
+	op = BPF_OP(code);
+	return op != BPF_JA && op != BPF_EXIT && op != BPF_CALL;
+}
+
+static void opt_hard_wire_dead_code_branches(struct bpf_verifier_env *env)
+{
+	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+	struct bpf_insn ja = BPF_JMP_IMM(BPF_JA, 0, 0, 0);
+	struct bpf_insn *insn = env->prog->insnsi;
+	const int insn_cnt = env->prog->len;
+	int i;
+
+	for (i = 0; i < insn_cnt; i++, insn++) {
+		if (!insn_is_cond_jump(insn->code))
+			continue;
+
+		if (!aux_data[i + 1].seen)
+			ja.off = insn->off;
+		else if (!aux_data[i + 1 + insn->off].seen)
+			ja.off = 0;
+		else
+			continue;
+
+		if (bpf_prog_is_dev_bound(env->prog->aux))
+			bpf_prog_offload_replace_insn(env, i, &ja);
+
+		memcpy(insn, &ja, sizeof(ja));
+	}
+}
+
+static int opt_remove_dead_code(struct bpf_verifier_env *env)
+{
+	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+	int insn_cnt = env->prog->len;
+	int i, err;
+
+	for (i = 0; i < insn_cnt; i++) {
+		int j;
+
+		j = 0;
+		while (i + j < insn_cnt && !aux_data[i + j].seen)
+			j++;
+		if (!j)
+			continue;
+
+		err = verifier_remove_insns(env, i, j);
+		if (err)
+			return err;
+		insn_cnt = env->prog->len;
+	}
+
+	return 0;
+}
+
+static int opt_remove_nops(struct bpf_verifier_env *env)
+{
+	const struct bpf_insn ja = BPF_JMP_IMM(BPF_JA, 0, 0, 0);
+	struct bpf_insn *insn = env->prog->insnsi;
+	int insn_cnt = env->prog->len;
+	int i, err;
+
+	for (i = 0; i < insn_cnt; i++) {
+		if (memcmp(&insn[i], &ja, sizeof(ja)))
+			continue;
+
+		err = verifier_remove_insns(env, i, 1);
+		if (err)
+			return err;
+		insn_cnt--;
+		i--;
+	}
+
+	return 0;
+}

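For intuition about what these passes operate on, consider a hypothetical program fragment (built with the uapi-style insn macros; not taken from the patch): the verifier tracks r0 as the constant 0, so is_branch_taken() proves the JEQ always jumps, the skipped instruction is never marked seen, opt_hard_wire_dead_code_branches() can turn the conditional jump into a plain BPF_JA, and opt_remove_dead_code() then deletes the dead instruction:

	struct bpf_insn prog[] = {
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),	/* always taken... */
		BPF_MOV64_IMM(BPF_REG_0, 1),		/* ...so this is dead */
		BPF_EXIT_INSN(),
	};
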
 /* convert load instructions that access fields of a context type into a
  * sequence of instructions that access fields of the underlying structure:
  *     struct __sk_buff    -> struct sk_buff

@@ -7147,7 +7542,8 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 {
 	struct bpf_verifier_env *env;
 	struct bpf_verifier_log *log;
-	int ret = -EINVAL;
+	int i, len, ret = -EINVAL;
 	bool is_priv;

 	/* no program is valid */
 	if (ARRAY_SIZE(bpf_verifier_ops) == 0)

@@ -7161,12 +7557,14 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 		return -ENOMEM;
 	log = &env->log;

+	len = (*prog)->len;
 	env->insn_aux_data =
-		vzalloc(array_size(sizeof(struct bpf_insn_aux_data),
-				   (*prog)->len));
+		vzalloc(array_size(sizeof(struct bpf_insn_aux_data), len));
 	ret = -ENOMEM;
 	if (!env->insn_aux_data)
 		goto err_free_env;
+	for (i = 0; i < len; i++)
+		env->insn_aux_data[i].orig_idx = i;
 	env->prog = *prog;
 	env->ops = bpf_verifier_ops[env->prog->type];

@@ -7194,6 +7592,9 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	if (attr->prog_flags & BPF_F_ANY_ALIGNMENT)
 		env->strict_alignment = false;

+	is_priv = capable(CAP_SYS_ADMIN);
+	env->allow_ptr_leaks = is_priv;
+
 	ret = replace_map_fd_with_map_ptr(env);
 	if (ret < 0)
 		goto skip_full_check;

@@ -7211,8 +7612,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	if (!env->explored_states)
 		goto skip_full_check;

-	env->allow_ptr_leaks = capable(CAP_SYS_ADMIN);
-
 	ret = check_subprogs(env);
 	if (ret < 0)
 		goto skip_full_check;

@@ -7242,8 +7641,17 @@ skip_full_check:
 		ret = check_max_stack_depth(env);

 	/* instruction rewrites happen after this point */
-	if (ret == 0)
-		sanitize_dead_code(env);
+	if (is_priv) {
+		if (ret == 0)
+			opt_hard_wire_dead_code_branches(env);
+		if (ret == 0)
+			ret = opt_remove_dead_code(env);
+		if (ret == 0)
+			ret = opt_remove_nops(env);
+	} else {
+		if (ret == 0)
+			sanitize_dead_code(env);
+	}

 	if (ret == 0)
 		/* program is valid, convert *(u32*)(ctx + off) accesses */

@@ -240,3 +240,85 @@ out:
 	kfree(data);
 	return ret;
 }
+
+int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
+				     const union bpf_attr *kattr,
+				     union bpf_attr __user *uattr)
+{
+	u32 size = kattr->test.data_size_in;
+	u32 repeat = kattr->test.repeat;
+	struct bpf_flow_keys flow_keys;
+	u64 time_start, time_spent = 0;
+	struct bpf_skb_data_end *cb;
+	u32 retval, duration;
+	struct sk_buff *skb;
+	struct sock *sk;
+	void *data;
+	int ret;
+	u32 i;
+
+	if (prog->type != BPF_PROG_TYPE_FLOW_DISSECTOR)
+		return -EINVAL;
+
+	data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN,
+			     SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
+	if (IS_ERR(data))
+		return PTR_ERR(data);
+
+	sk = kzalloc(sizeof(*sk), GFP_USER);
+	if (!sk) {
+		kfree(data);
+		return -ENOMEM;
+	}
+	sock_net_set(sk, current->nsproxy->net_ns);
+	sock_init_data(NULL, sk);
+
+	skb = build_skb(data, 0);
+	if (!skb) {
+		kfree(data);
+		kfree(sk);
+		return -ENOMEM;
+	}
+	skb->sk = sk;
+
+	skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
+	__skb_put(skb, size);
+	skb->protocol = eth_type_trans(skb,
+				       current->nsproxy->net_ns->loopback_dev);
+	skb_reset_network_header(skb);
+
+	cb = (struct bpf_skb_data_end *)skb->cb;
+	cb->qdisc_cb.flow_keys = &flow_keys;
+
+	if (!repeat)
+		repeat = 1;
+
+	time_start = ktime_get_ns();
+	for (i = 0; i < repeat; i++) {
+		preempt_disable();
+		rcu_read_lock();
+		retval = __skb_flow_bpf_dissect(prog, skb,
+						&flow_keys_dissector,
+						&flow_keys);
+		rcu_read_unlock();
+		preempt_enable();
+
+		if (need_resched()) {
+			if (signal_pending(current))
+				break;
+			time_spent += ktime_get_ns() - time_start;
+			cond_resched();
+			time_start = ktime_get_ns();
+		}
+	}
+	time_spent += ktime_get_ns() - time_start;
+	do_div(time_spent, repeat);
+	duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
+
+	ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
+			      retval, duration);
+
+	kfree_skb(skb);
+	kfree(sk);
+	return ret;
+}

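From user space this new hook is reached through the existing BPF_PROG_TEST_RUN command. A rough sketch using libbpf's bpf_prog_test_run() wrapper of that era (the prog_fd, the packet bytes, and the exact data_out contract are assumptions layered on top of what the diff shows):

	#include <bpf/bpf.h>
	#include <linux/bpf.h>
	#include <stdio.h>

	/* prog_fd must refer to a loaded BPF_PROG_TYPE_FLOW_DISSECTOR program;
	 * pkt/pkt_len are caller-provided bytes starting at the Ethernet header.
	 */
	static int test_flow_dissector(int prog_fd, void *pkt, __u32 pkt_len)
	{
		struct bpf_flow_keys keys = {};
		__u32 keys_len = sizeof(keys);
		__u32 retval = 0, duration = 0;
		int err;

		err = bpf_prog_test_run(prog_fd, 1 /* repeat */, pkt, pkt_len,
					&keys, &keys_len, &retval, &duration);
		if (err)
			return err;

		/* bpf_test_finish() copied the dissected flow keys back */
		printf("retval %u nhoff %u thoff %u\n",
		       retval, keys.nhoff, keys.thoff);
		return 0;
	}
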
@@ -6708,6 +6708,27 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 						      target_size));
 		break;

+	case offsetof(struct __sk_buff, gso_segs):
+		/* si->dst_reg = skb_shinfo(SKB); */
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, head),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, head));
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, end),
+				      BPF_REG_AX, si->src_reg,
+				      offsetof(struct sk_buff, end));
+		*insn++ = BPF_ALU64_REG(BPF_ADD, si->dst_reg, BPF_REG_AX);
+#else
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, end),
+				      si->dst_reg, si->src_reg,
+				      offsetof(struct sk_buff, end));
+#endif
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct skb_shared_info, gso_segs),
+				      si->dst_reg, si->dst_reg,
+				      bpf_target_off(struct skb_shared_info,
+						     gso_segs, 2,
+						     target_size));
+		break;
 	case offsetof(struct __sk_buff, wire_len):
 		BUILD_BUG_ON(FIELD_SIZEOF(struct qdisc_skb_cb, pkt_len) != 4);

@@ -7698,6 +7719,7 @@ const struct bpf_verifier_ops flow_dissector_verifier_ops = {
 };

 const struct bpf_prog_ops flow_dissector_prog_ops = {
+	.test_run = bpf_prog_test_run_flow_dissector,
 };

 int sk_detach_filter(struct sock *sk)

@@ -683,6 +683,46 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
 	}
 }

+bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
+			    const struct sk_buff *skb,
+			    struct flow_dissector *flow_dissector,
+			    struct bpf_flow_keys *flow_keys)
+{
+	struct bpf_skb_data_end cb_saved;
+	struct bpf_skb_data_end *cb;
+	u32 result;
+
+	/* Note that even though the const qualifier is discarded
+	 * throughout the execution of the BPF program, all changes(the
+	 * control block) are reverted after the BPF program returns.
+	 * Therefore, __skb_flow_dissect does not alter the skb.
+	 */
+
+	cb = (struct bpf_skb_data_end *)skb->cb;
+
+	/* Save Control Block */
+	memcpy(&cb_saved, cb, sizeof(cb_saved));
+	memset(cb, 0, sizeof(*cb));
+
+	/* Pass parameters to the BPF program */
+	memset(flow_keys, 0, sizeof(*flow_keys));
+	cb->qdisc_cb.flow_keys = flow_keys;
+	flow_keys->nhoff = skb_network_offset(skb);
+	flow_keys->thoff = flow_keys->nhoff;
+
+	bpf_compute_data_pointers((struct sk_buff *)skb);
+	result = BPF_PROG_RUN(prog, skb);
+
+	/* Restore state */
+	memcpy(cb, &cb_saved, sizeof(cb_saved));
+
+	flow_keys->nhoff = clamp_t(u16, flow_keys->nhoff, 0, skb->len);
+	flow_keys->thoff = clamp_t(u16, flow_keys->thoff,
+				   flow_keys->nhoff, skb->len);
+
+	return result == BPF_OK;
+}
+
 /**
  * __skb_flow_dissect - extract the flow_keys struct and return it
  * @skb: sk_buff to extract the flow from, can be NULL if the rest are specified

@@ -714,7 +754,6 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
 	struct flow_dissector_key_vlan *key_vlan;
 	enum flow_dissect_ret fdret;
 	enum flow_dissector_key_id dissector_vlan = FLOW_DISSECTOR_KEY_MAX;
-	struct bpf_prog *attached = NULL;
 	int num_hdrs = 0;
 	u8 ip_proto = 0;
 	bool ret;

@@ -754,53 +793,30 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
 					      FLOW_DISSECTOR_KEY_BASIC,
 					      target_container);

-	rcu_read_lock();
 	if (skb) {
+		struct bpf_flow_keys flow_keys;
+		struct bpf_prog *attached = NULL;
+
+		rcu_read_lock();
+
 		if (skb->dev)
 			attached = rcu_dereference(dev_net(skb->dev)->flow_dissector_prog);
 		else if (skb->sk)
 			attached = rcu_dereference(sock_net(skb->sk)->flow_dissector_prog);
 		else
 			WARN_ON_ONCE(1);
-	}
-	if (attached) {
-		/* Note that even though the const qualifier is discarded
-		 * throughout the execution of the BPF program, all changes(the
-		 * control block) are reverted after the BPF program returns.
-		 * Therefore, __skb_flow_dissect does not alter the skb.
-		 */
-		struct bpf_flow_keys flow_keys = {};
-		struct bpf_skb_data_end cb_saved;
-		struct bpf_skb_data_end *cb;
-		u32 result;
-
-		cb = (struct bpf_skb_data_end *)skb->cb;
-
-		/* Save Control Block */
-		memcpy(&cb_saved, cb, sizeof(cb_saved));
-		memset(cb, 0, sizeof(cb_saved));
-
-		/* Pass parameters to the BPF program */
-		cb->qdisc_cb.flow_keys = &flow_keys;
-		flow_keys.nhoff = nhoff;
-		flow_keys.thoff = nhoff;
-
-		bpf_compute_data_pointers((struct sk_buff *)skb);
-		result = BPF_PROG_RUN(attached, skb);
-
-		/* Restore state */
-		memcpy(cb, &cb_saved, sizeof(cb_saved));
-
-		flow_keys.nhoff = clamp_t(u16, flow_keys.nhoff, 0, skb->len);
-		flow_keys.thoff = clamp_t(u16, flow_keys.thoff,
-					  flow_keys.nhoff, skb->len);
-
-		__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
-					 target_container);
+		if (attached) {
+			ret = __skb_flow_bpf_dissect(attached, skb,
+						     flow_dissector,
+						     &flow_keys);
+			__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
+						 target_container);
+			rcu_read_unlock();
+			return ret;
+		}
 		rcu_read_unlock();
-		return result == BPF_OK;
 	}
-	rcu_read_unlock();

 	if (dissector_uses_key(flow_dissector,
 			       FLOW_DISSECTOR_KEY_ETH_ADDRS)) {

@@ -5,3 +5,11 @@ config XDP_SOCKETS
 	help
 	  XDP sockets allows a channel between XDP programs and
 	  userspace applications.
+
+config XDP_SOCKETS_DIAG
+	tristate "XDP sockets: monitoring interface"
+	depends on XDP_SOCKETS
+	default n
+	help
+	  Support for PF_XDP sockets monitoring interface used by the ss tool.
+	  If unsure, say Y.

@@ -1 +1,2 @@
 obj-$(CONFIG_XDP_SOCKETS) += xsk.o xdp_umem.o xsk_queue.o
+obj-$(CONFIG_XDP_SOCKETS_DIAG) += xsk_diag.o

@@ -13,12 +13,15 @@
 #include <linux/mm.h>
 #include <linux/netdevice.h>
 #include <linux/rtnetlink.h>
+#include <linux/idr.h>

 #include "xdp_umem.h"
 #include "xsk_queue.h"

 #define XDP_UMEM_MIN_CHUNK_SIZE 2048

+static DEFINE_IDA(umem_ida);
+
 void xdp_add_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs)
 {
 	unsigned long flags;

@@ -194,6 +197,8 @@ static void xdp_umem_release(struct xdp_umem *umem)

 	xdp_umem_clear_dev(umem);

+	ida_simple_remove(&umem_ida, umem->id);
+
 	if (umem->fq) {
 		xskq_destroy(umem->fq);
 		umem->fq = NULL;

@@ -400,8 +405,16 @@ struct xdp_umem *xdp_umem_create(struct xdp_umem_reg *mr)
 	if (!umem)
 		return ERR_PTR(-ENOMEM);

+	err = ida_simple_get(&umem_ida, 0, 0, GFP_KERNEL);
+	if (err < 0) {
+		kfree(umem);
+		return ERR_PTR(err);
+	}
+	umem->id = err;
+
 	err = xdp_umem_reg(umem, mr);
 	if (err) {
+		ida_simple_remove(&umem_ida, umem->id);
 		kfree(umem);
 		return ERR_PTR(err);
 	}

@@ -27,14 +27,10 @@

 #include "xsk_queue.h"
 #include "xdp_umem.h"
+#include "xsk.h"

 #define TX_BATCH_SIZE 16

-static struct xdp_sock *xdp_sk(struct sock *sk)
-{
-	return (struct xdp_sock *)sk;
-}
-
 bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
 {
 	return READ_ONCE(xs->rx) && READ_ONCE(xs->umem) &&

@@ -350,6 +346,10 @@ static int xsk_release(struct socket *sock)

 	net = sock_net(sk);

+	mutex_lock(&net->xdp.lock);
+	sk_del_node_init_rcu(sk);
+	mutex_unlock(&net->xdp.lock);
+
 	local_bh_disable();
 	sock_prot_inuse_add(net, sk->sk_prot, -1);
 	local_bh_enable();

@@ -746,6 +746,10 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
 	mutex_init(&xs->mutex);
 	spin_lock_init(&xs->tx_completion_lock);

+	mutex_lock(&net->xdp.lock);
+	sk_add_node_rcu(sk, &net->xdp.list);
+	mutex_unlock(&net->xdp.lock);
+
 	local_bh_disable();
 	sock_prot_inuse_add(net, &xsk_proto, 1);
 	local_bh_enable();

@@ -759,6 +763,23 @@ static const struct net_proto_family xsk_family_ops = {
 	.owner	= THIS_MODULE,
 };

+static int __net_init xsk_net_init(struct net *net)
+{
+	mutex_init(&net->xdp.lock);
+	INIT_HLIST_HEAD(&net->xdp.list);
+	return 0;
+}
+
+static void __net_exit xsk_net_exit(struct net *net)
+{
+	WARN_ON_ONCE(!hlist_empty(&net->xdp.list));
+}
+
+static struct pernet_operations xsk_net_ops = {
+	.init = xsk_net_init,
+	.exit = xsk_net_exit,
+};
+
 static int __init xsk_init(void)
 {
 	int err;

@@ -771,8 +792,13 @@ static int __init xsk_init(void)
 	if (err)
 		goto out_proto;

+	err = register_pernet_subsys(&xsk_net_ops);
+	if (err)
+		goto out_sk;
 	return 0;

+out_sk:
+	sock_unregister(PF_XDP);
 out_proto:
 	proto_unregister(&xsk_proto);
 out:

@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2019 Intel Corporation. */
+
+#ifndef XSK_H_
+#define XSK_H_
+
+static inline struct xdp_sock *xdp_sk(struct sock *sk)
+{
+	return (struct xdp_sock *)sk;
+}
+
+#endif /* XSK_H_ */

@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0
+/* XDP sockets monitoring support
+ *
+ * Copyright(c) 2019 Intel Corporation.
+ *
+ * Author: Björn Töpel <bjorn.topel@intel.com>
+ */
+
+#include <linux/module.h>
+#include <net/xdp_sock.h>
+#include <linux/xdp_diag.h>
+#include <linux/sock_diag.h>
+
+#include "xsk_queue.h"
+#include "xsk.h"
+
+static int xsk_diag_put_info(const struct xdp_sock *xs, struct sk_buff *nlskb)
+{
+	struct xdp_diag_info di = {};
+
+	di.ifindex = xs->dev ? xs->dev->ifindex : 0;
+	di.queue_id = xs->queue_id;
+	return nla_put(nlskb, XDP_DIAG_INFO, sizeof(di), &di);
+}
+
+static int xsk_diag_put_ring(const struct xsk_queue *queue, int nl_type,
+			     struct sk_buff *nlskb)
+{
+	struct xdp_diag_ring dr = {};
+
+	dr.entries = queue->nentries;
+	return nla_put(nlskb, nl_type, sizeof(dr), &dr);
+}
+
+static int xsk_diag_put_rings_cfg(const struct xdp_sock *xs,
+				  struct sk_buff *nlskb)
+{
+	int err = 0;
+
+	if (xs->rx)
+		err = xsk_diag_put_ring(xs->rx, XDP_DIAG_RX_RING, nlskb);
+	if (!err && xs->tx)
+		err = xsk_diag_put_ring(xs->tx, XDP_DIAG_TX_RING, nlskb);
+	return err;
+}
+
+static int xsk_diag_put_umem(const struct xdp_sock *xs, struct sk_buff *nlskb)
+{
+	struct xdp_umem *umem = xs->umem;
+	struct xdp_diag_umem du = {};
+	int err;
+
+	if (!umem)
+		return 0;
+
+	du.id = umem->id;
+	du.size = umem->size;
+	du.num_pages = umem->npgs;
+	du.chunk_size = (__u32)(~umem->chunk_mask + 1);
+	du.headroom = umem->headroom;
+	du.ifindex = umem->dev ? umem->dev->ifindex : 0;
+	du.queue_id = umem->queue_id;
+	du.flags = 0;
+	if (umem->zc)
+		du.flags |= XDP_DU_F_ZEROCOPY;
+	du.refs = refcount_read(&umem->users);
+
+	err = nla_put(nlskb, XDP_DIAG_UMEM, sizeof(du), &du);
+
+	if (!err && umem->fq)
+		err = xsk_diag_put_ring(xs->tx, XDP_DIAG_UMEM_FILL_RING, nlskb);
+	if (!err && umem->cq) {
+		err = xsk_diag_put_ring(xs->tx, XDP_DIAG_UMEM_COMPLETION_RING,
+					nlskb);
+	}
+	return err;
+}
+
+static int xsk_diag_fill(struct sock *sk, struct sk_buff *nlskb,
+			 struct xdp_diag_req *req,
+			 struct user_namespace *user_ns,
+			 u32 portid, u32 seq, u32 flags, int sk_ino)
+{
+	struct xdp_sock *xs = xdp_sk(sk);
+	struct xdp_diag_msg *msg;
+	struct nlmsghdr *nlh;
+
+	nlh = nlmsg_put(nlskb, portid, seq, SOCK_DIAG_BY_FAMILY, sizeof(*msg),
+			flags);
+	if (!nlh)
+		return -EMSGSIZE;
+
+	msg = nlmsg_data(nlh);
+	memset(msg, 0, sizeof(*msg));
+	msg->xdiag_family = AF_XDP;
+	msg->xdiag_type = sk->sk_type;
+	msg->xdiag_ino = sk_ino;
+	sock_diag_save_cookie(sk, msg->xdiag_cookie);
+
+	if ((req->xdiag_show & XDP_SHOW_INFO) && xsk_diag_put_info(xs, nlskb))
+		goto out_nlmsg_trim;
+
+	if ((req->xdiag_show & XDP_SHOW_INFO) &&
+	    nla_put_u32(nlskb, XDP_DIAG_UID,
+			from_kuid_munged(user_ns, sock_i_uid(sk))))
+		goto out_nlmsg_trim;
+
+	if ((req->xdiag_show & XDP_SHOW_RING_CFG) &&
+	    xsk_diag_put_rings_cfg(xs, nlskb))
+		goto out_nlmsg_trim;
+
+	if ((req->xdiag_show & XDP_SHOW_UMEM) &&
+	    xsk_diag_put_umem(xs, nlskb))
+		goto out_nlmsg_trim;
+
+	if ((req->xdiag_show & XDP_SHOW_MEMINFO) &&
+	    sock_diag_put_meminfo(sk, nlskb, XDP_DIAG_MEMINFO))
+		goto out_nlmsg_trim;
+
+	nlmsg_end(nlskb, nlh);
+	return 0;
+
+out_nlmsg_trim:
+	nlmsg_cancel(nlskb, nlh);
+	return -EMSGSIZE;
+}
+
+static int xsk_diag_dump(struct sk_buff *nlskb, struct netlink_callback *cb)
+{
+	struct xdp_diag_req *req = nlmsg_data(cb->nlh);
+	struct net *net = sock_net(nlskb->sk);
+	int num = 0, s_num = cb->args[0];
+	struct sock *sk;
+
+	mutex_lock(&net->xdp.lock);
+
+	sk_for_each(sk, &net->xdp.list) {
+		if (!net_eq(sock_net(sk), net))
+			continue;
+		if (num++ < s_num)
+			continue;
+
+		if (xsk_diag_fill(sk, nlskb, req,
+				  sk_user_ns(NETLINK_CB(cb->skb).sk),
+				  NETLINK_CB(cb->skb).portid,
+				  cb->nlh->nlmsg_seq, NLM_F_MULTI,
+				  sock_i_ino(sk)) < 0) {
+			num--;
+			break;
+		}
+	}
+
+	mutex_unlock(&net->xdp.lock);
+	cb->args[0] = num;
+	return nlskb->len;
+}
+
+static int xsk_diag_handler_dump(struct sk_buff *nlskb, struct nlmsghdr *hdr)
+{
+	struct netlink_dump_control c = { .dump = xsk_diag_dump };
+	int hdrlen = sizeof(struct xdp_diag_req);
+	struct net *net = sock_net(nlskb->sk);
+
+	if (nlmsg_len(hdr) < hdrlen)
+		return -EINVAL;
+
+	if (!(hdr->nlmsg_flags & NLM_F_DUMP))
+		return -EOPNOTSUPP;
+
+	return netlink_dump_start(net->diag_nlsk, nlskb, hdr, &c);
+}
+
+static const struct sock_diag_handler xsk_diag_handler = {
+	.family = AF_XDP,
+	.dump = xsk_diag_handler_dump,
+};
+
+static int __init xsk_diag_init(void)
+{
+	return sock_diag_register(&xsk_diag_handler);
+}
+
+static void __exit xsk_diag_exit(void)
+{
+	sock_diag_unregister(&xsk_diag_handler);
+}
+
+module_init(xsk_diag_init);
+module_exit(xsk_diag_exit);
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, AF_XDP);

@@ -164,6 +164,16 @@ struct bpf_insn;
 		.off   = OFF,					\
 		.imm   = 0 })

+/* Like BPF_JMP_REG, but with 32-bit wide operands for comparison. */
+
+#define BPF_JMP32_REG(OP, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_JMP32 | BPF_OP(OP) | BPF_X,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = 0 })
+
 /* Conditional jumps against immediates, if (dst_reg 'op' imm32) goto pc + off16 */

 #define BPF_JMP_IMM(OP, DST, IMM, OFF)				\

@@ -174,6 +184,16 @@ struct bpf_insn;
 		.off   = OFF,					\
 		.imm   = IMM })

+/* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
+
+#define BPF_JMP32_IMM(OP, DST, IMM, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_JMP32 | BPF_OP(OP) | BPF_K,	\
+		.dst_reg = DST,					\
+		.src_reg = 0,					\
+		.off   = OFF,					\
+		.imm   = IMM })
+
 /* Raw code statement block */

 #define BPF_RAW_INSN(CODE, DST, SRC, OFF, IMM)			\

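A hypothetical hand-assembled fragment (illustrative only, using these macros) shows what the new constructor emits next to its 64-bit counterpart; only the class byte differs, so the comparison covers just the low 32 bits of r1:

	struct bpf_insn prog[] = {
		/* if (w1 == 0x1) goto +2 -- 32-bit compare */
		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x1, 2),
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
		BPF_MOV64_IMM(BPF_REG_0, 1),
		BPF_EXIT_INSN(),
	};
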
@@ -142,5 +142,6 @@ SEE ALSO
 	**bpftool**\ (8),
 	**bpftool-prog**\ (8),
 	**bpftool-map**\ (8),
+	**bpftool-feature**\ (8),
 	**bpftool-net**\ (8),
 	**bpftool-perf**\ (8)

@@ -0,0 +1,85 @@
===============
bpftool-feature
===============
-------------------------------------------------------------------------------
tool for inspection of eBPF-related parameters for Linux kernel or net device
-------------------------------------------------------------------------------

:Manual section: 8

SYNOPSIS
========

	**bpftool** [*OPTIONS*] **feature** *COMMAND*

	*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] }

	*COMMANDS* := { **probe** | **help** }

FEATURE COMMANDS
================

|	**bpftool** **feature probe** [*COMPONENT*] [**macros** [**prefix** *PREFIX*]]
|	**bpftool** **feature help**
|
|	*COMPONENT* := { **kernel** | **dev** *NAME* }

DESCRIPTION
===========
	**bpftool feature probe** [**kernel**] [**macros** [**prefix** *PREFIX*]]
		  Probe the running kernel and dump a number of eBPF-related
		  parameters, such as availability of the **bpf()** system call,
		  JIT status, eBPF program types availability, eBPF helper
		  functions availability, and more.

		  If the **macros** keyword (but not the **-j** option) is
		  passed, a subset of the output is dumped as a list of
		  **#define** macros that are ready to be included in a C
		  header file, for example. If, additionally, **prefix** is
		  used to define a *PREFIX*, the provided string will be used
		  as a prefix to the names of the macros: this can be used to
		  avoid conflicts on macro names when including the output of
		  this command as a header file.

		  Keyword **kernel** can be omitted. If no probe target is
		  specified, probing the kernel is the default behaviour.

		  Note that when probed, some eBPF helpers (e.g.
		  **bpf_trace_printk**\ () or **bpf_probe_write_user**\ ()) may
		  print warnings to kernel logs.

	**bpftool feature probe dev** *NAME* [**macros** [**prefix** *PREFIX*]]
		  Probe network device for supported eBPF features and dump
		  results to the console.

		  The two keywords **macros** and **prefix** have the same
		  role as when probing the kernel.

	**bpftool feature help**
		  Print short help message.

OPTIONS
=======
	-h, --help
		  Print short generic help message (similar to **bpftool help**).

	-v, --version
		  Print version number (similar to **bpftool version**).

	-j, --json
		  Generate JSON output. For commands that cannot produce JSON, this
		  option has no effect.

	-p, --pretty
		  Generate human-readable JSON output. Implies **-j**.

SEE ALSO
========
	**bpf**\ (2),
	**bpf-helpers**\ (7),
	**bpftool**\ (8),
	**bpftool-prog**\ (8),
	**bpftool-map**\ (8),
	**bpftool-cgroup**\ (8),
	**bpftool-net**\ (8),
	**bpftool-perf**\ (8)
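A hypothetical consumer of the **macros** output, assuming it was redirected to a header named myapp_features.h with prefix MYAPP_; the HAVE_* naming follows the print_bool_feature() format in feature.c later in this patch, but the file name and prefix here are illustrative assumptions:

#include <stdio.h>
#include "myapp_features.h" /* bpftool feature probe macros prefix MYAPP_ */

int main(void)
{
#ifdef MYAPP_HAVE_BPF_SYSCALL
	printf("bpf() syscall available\n");
#else
	printf("bpf() syscall not available\n");
#endif
	return 0;
}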
@@ -25,12 +25,17 @@ MAP COMMANDS
| **bpftool** **map create** *FILE* **type** *TYPE* **key** *KEY_SIZE* **value** *VALUE_SIZE* \
|	**entries** *MAX_ENTRIES* **name** *NAME* [**flags** *FLAGS*] [**dev** *NAME*]
| **bpftool** **map dump** *MAP*
-| **bpftool** **map update** *MAP* **key** *DATA* **value** *VALUE* [*UPDATE_FLAGS*]
-| **bpftool** **map lookup** *MAP* **key** *DATA*
+| **bpftool** **map update** *MAP* [**key** *DATA*] [**value** *VALUE*] [*UPDATE_FLAGS*]
+| **bpftool** **map lookup** *MAP* [**key** *DATA*]
| **bpftool** **map getnext** *MAP* [**key** *DATA*]
| **bpftool** **map delete** *MAP* **key** *DATA*
| **bpftool** **map pin** *MAP* *FILE*
| **bpftool** **map event_pipe** *MAP* [**cpu** *N* **index** *M*]
+| **bpftool** **map peek** *MAP*
+| **bpftool** **map push** *MAP* **value** *VALUE*
+| **bpftool** **map pop** *MAP*
+| **bpftool** **map enqueue** *MAP* **value** *VALUE*
+| **bpftool** **map dequeue** *MAP*
| **bpftool** **map help**

| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* }
@@ -62,7 +67,7 @@ DESCRIPTION
	**bpftool map dump** *MAP*
		  Dump all entries in a given *MAP*.

-	**bpftool map update** *MAP* **key** *DATA* **value** *VALUE* [*UPDATE_FLAGS*]
+	**bpftool map update** *MAP* [**key** *DATA*] [**value** *VALUE*] [*UPDATE_FLAGS*]
		  Update map entry for a given *KEY*.

		  *UPDATE_FLAGS* can be one of: **any** update existing entry
@@ -75,7 +80,7 @@ DESCRIPTION
		  the bytes are parsed as decimal values, unless a "0x" prefix
		  (for hexadecimal) or a "0" prefix (for octal) is provided.

-	**bpftool map lookup** *MAP* **key** *DATA*
+	**bpftool map lookup** *MAP* [**key** *DATA*]
		  Lookup **key** in the map.

	**bpftool map getnext** *MAP* [**key** *DATA*]
@@ -107,6 +112,21 @@ DESCRIPTION
		  replace any existing ring. Any other application will stop
		  receiving events if it installed its rings earlier.

+	**bpftool map peek** *MAP*
+		  Peek next **value** in the queue or stack.
+
+	**bpftool map push** *MAP* **value** *VALUE*
+		  Push **value** onto the stack.
+
+	**bpftool map pop** *MAP*
+		  Pop and print **value** from the stack.
+
+	**bpftool map enqueue** *MAP* **value** *VALUE*
+		  Enqueue **value** into the queue.
+
+	**bpftool map dequeue** *MAP*
+		  Dequeue and print **value** from the queue.
+
	**bpftool map help**
		  Print short help message.
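These commands are thin wrappers over the map syscalls: per the cmds[] table later in this patch, push/enqueue reuse do_update() and pop/dequeue call bpf_map_lookup_and_delete_elem(). A minimal sketch of the same calls from C, assuming libbpf is installed as <bpf/bpf.h> and the running kernel supports BPF_MAP_TYPE_QUEUE (queue/stack maps take a zero-sized, NULL key):

#include <stdio.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

int main(void)
{
	__u32 val = 42, out = 0;
	int fd = bpf_create_map(BPF_MAP_TYPE_QUEUE, 0, sizeof(__u32), 16, 0);

	if (fd < 0)
		return 1;
	bpf_map_update_elem(fd, NULL, &val, 0);         /* enqueue / push */
	bpf_map_lookup_and_delete_elem(fd, NULL, &out); /* dequeue / pop */
	printf("dequeued %u\n", out);
	return 0;
}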
@@ -236,5 +256,6 @@ SEE ALSO
**bpftool**\ (8),
**bpftool-prog**\ (8),
**bpftool-cgroup**\ (8),
+**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)
@@ -142,4 +142,5 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
+**bpftool-feature**\ (8),
**bpftool-perf**\ (8)
@@ -84,4 +84,5 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
+**bpftool-feature**\ (8),
**bpftool-net**\ (8)
@@ -258,5 +258,6 @@ SEE ALSO
**bpftool**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
+**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)
@@ -72,5 +72,6 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
+**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)
@@ -50,14 +50,15 @@ _bpftool_get_map_ids()
        command sed -n 's/.*"id": \(.*\),$/\1/p' )" -- "$cur" ) )
}

-_bpftool_get_perf_map_ids()
+# Takes map type and adds matching map ids to the list of suggestions.
+_bpftool_get_map_ids_for_type()
{
+    local type="$1"
    COMPREPLY+=( $( compgen -W "$( bpftool -jp map 2>&1 | \
-        command grep -C2 perf_event_array | \
+        command grep -C2 "$type" | \
        command sed -n 's/.*"id": \(.*\),$/\1/p' )" -- "$cur" ) )
}


_bpftool_get_prog_ids()
{
    COMPREPLY+=( $( compgen -W "$( bpftool -jp prog 2>&1 | \
@@ -99,15 +100,25 @@ _sysfs_get_netdevs()
        "$cur" ) )
}

-# For bpftool map update: retrieve type of the map to update.
-_bpftool_map_update_map_type()
+# Retrieve type of the map that we are operating on.
+_bpftool_map_guess_map_type()
{
    local keyword ref
    for (( idx=3; idx < ${#words[@]}-1; idx++ )); do
-        if [[ ${words[$((idx-2))]} == "update" ]]; then
-            keyword=${words[$((idx-1))]}
-            ref=${words[$((idx))]}
-        fi
+        case "${words[$((idx-2))]}" in
+            lookup|update)
+                keyword=${words[$((idx-1))]}
+                ref=${words[$((idx))]}
+                ;;
+            push)
+                printf "stack"
+                return 0
+                ;;
+            enqueue)
+                printf "queue"
+                return 0
+                ;;
+        esac
    done
    [[ -z $ref ]] && return 0
@@ -119,6 +130,8 @@ _bpftool_map_update_map_type()

_bpftool_map_update_get_id()
{
+    local command="$1"
+
    # Is it the map to update, or a map to insert into the map to update?
    # Search for "value" keyword.
    local idx value
@@ -128,11 +141,24 @@ _bpftool_map_update_get_id()
            break
        fi
    done
-    [[ $value -eq 0 ]] && _bpftool_get_map_ids && return 0
+    if [[ $value -eq 0 ]]; then
+        case "$command" in
+            push)
+                _bpftool_get_map_ids_for_type stack
+                ;;
+            enqueue)
+                _bpftool_get_map_ids_for_type queue
+                ;;
+            *)
+                _bpftool_get_map_ids
+                ;;
+        esac
+        return 0
+    fi

    # Id to complete is for a value. It can be either prog id or map id. This
    # depends on the type of the map to update.
-    local type=$(_bpftool_map_update_map_type)
+    local type=$(_bpftool_map_guess_map_type)
    case $type in
        array_of_maps|hash_of_maps)
            _bpftool_get_map_ids
@@ -382,14 +408,28 @@ _bpftool()
        map)
            local MAP_TYPE='id pinned'
            case $command in
-                show|list|dump)
+                show|list|dump|peek|pop|dequeue)
                    case $prev in
                        $command)
                            COMPREPLY=( $( compgen -W "$MAP_TYPE" -- "$cur" ) )
                            return 0
                            ;;
                        id)
-                            _bpftool_get_map_ids
+                            case "$command" in
+                                peek)
+                                    _bpftool_get_map_ids_for_type stack
+                                    _bpftool_get_map_ids_for_type queue
+                                    ;;
+                                pop)
+                                    _bpftool_get_map_ids_for_type stack
+                                    ;;
+                                dequeue)
+                                    _bpftool_get_map_ids_for_type queue
+                                    ;;
+                                *)
+                                    _bpftool_get_map_ids
+                                    ;;
+                            esac
                            return 0
                            ;;
                        *)
@@ -447,19 +487,25 @@ _bpftool()
                            COMPREPLY+=( $( compgen -W 'hex' -- "$cur" ) )
                            ;;
                        *)
+                            case $(_bpftool_map_guess_map_type) in
+                                queue|stack)
+                                    return 0
+                                    ;;
+                            esac
+
                            _bpftool_once_attr 'key'
                            return 0
                            ;;
                    esac
                    ;;
-                update)
+                update|push|enqueue)
                    case $prev in
                        $command)
                            COMPREPLY=( $( compgen -W "$MAP_TYPE" -- "$cur" ) )
                            return 0
                            ;;
                        id)
-                            _bpftool_map_update_get_id
+                            _bpftool_map_update_get_id $command
                            return 0
                            ;;
                        key)
@@ -468,7 +514,7 @@ _bpftool()
                        value)
                            # We can have bytes, or references to a prog or a
                            # map, depending on the type of the map to update.
-                            case $(_bpftool_map_update_map_type) in
+                            case "$(_bpftool_map_guess_map_type)" in
                                array_of_maps|hash_of_maps)
                                    local MAP_TYPE='id pinned'
                                    COMPREPLY+=( $( compgen -W "$MAP_TYPE" \
@@ -490,6 +536,13 @@ _bpftool()
                            return 0
                            ;;
                        *)
+                            case $(_bpftool_map_guess_map_type) in
+                                queue|stack)
+                                    _bpftool_once_attr 'value'
+                                    return 0;
+                                    ;;
+                            esac
+
                            _bpftool_once_attr 'key'
                            local UPDATE_FLAGS='any exist noexist'
                            for (( idx=3; idx < ${#words[@]}-1; idx++ )); do
@@ -508,6 +561,7 @@ _bpftool()
                                    return 0
                                fi
                            done

+                            return 0
                            ;;
                    esac
@@ -527,7 +581,7 @@ _bpftool()
                            return 0
                            ;;
                        id)
-                            _bpftool_get_perf_map_ids
+                            _bpftool_get_map_ids_for_type perf_event_array
                            return 0
                            ;;
                        cpu)
@@ -546,7 +600,8 @@ _bpftool()
                *)
                    [[ $prev == $object ]] && \
                        COMPREPLY=( $( compgen -W 'delete dump getnext help \
-                            lookup pin event_pipe show list update create' -- \
+                            lookup pin event_pipe show list update create \
+                            peek push enqueue pop dequeue' -- \
                            "$cur" ) )
                    ;;
            esac
@@ -624,6 +679,25 @@ _bpftool()
                    ;;
            esac
            ;;
+        feature)
+            case $command in
+                probe)
+                    [[ $prev == "dev" ]] && _sysfs_get_netdevs && return 0
+                    [[ $prev == "prefix" ]] && return 0
+                    if _bpftool_search_list 'macros'; then
+                        COMPREPLY+=( $( compgen -W 'prefix' -- "$cur" ) )
+                    else
+                        COMPREPLY+=( $( compgen -W 'macros' -- "$cur" ) )
+                    fi
+                    _bpftool_one_of_list 'kernel dev'
+                    return 0
+                    ;;
+                *)
+                    [[ $prev == $object ]] && \
+                        COMPREPLY=( $( compgen -W 'help probe' -- "$cur" ) )
+                    ;;
+            esac
+            ;;
    esac
} &&
complete -F _bpftool bpftool
@@ -73,35 +73,104 @@ static int btf_dumper_array(const struct btf_dumper *d, __u32 type_id,
	return ret;
}

+static void btf_int128_print(json_writer_t *jw, const void *data,
+			     bool is_plain_text)
+{
+	/* data points to a __int128 number.
+	 * Suppose
+	 *     int128_num = *(__int128 *)data;
+	 * The formulas below show what upper_num and lower_num represent:
+	 *     upper_num = int128_num >> 64;
+	 *     lower_num = int128_num & 0xffffffffFFFFFFFFULL;
+	 */
+	__u64 upper_num, lower_num;
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	upper_num = *(__u64 *)data;
+	lower_num = *(__u64 *)(data + 8);
+#else
+	upper_num = *(__u64 *)(data + 8);
+	lower_num = *(__u64 *)data;
+#endif
+
+	if (is_plain_text) {
+		if (upper_num == 0)
+			jsonw_printf(jw, "0x%llx", lower_num);
+		else
+			jsonw_printf(jw, "0x%llx%016llx", upper_num, lower_num);
+	} else {
+		if (upper_num == 0)
+			jsonw_printf(jw, "\"0x%llx\"", lower_num);
+		else
+			jsonw_printf(jw, "\"0x%llx%016llx\"", upper_num, lower_num);
+	}
+}
+
+static void btf_int128_shift(__u64 *print_num, u16 left_shift_bits,
+			     u16 right_shift_bits)
+{
+	__u64 upper_num, lower_num;
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	upper_num = print_num[0];
+	lower_num = print_num[1];
+#else
+	upper_num = print_num[1];
+	lower_num = print_num[0];
+#endif
+
+	/* shake out un-needed bits by shift/or operations */
+	if (left_shift_bits >= 64) {
+		upper_num = lower_num << (left_shift_bits - 64);
+		lower_num = 0;
+	} else {
+		upper_num = (upper_num << left_shift_bits) |
+			    (lower_num >> (64 - left_shift_bits));
+		lower_num = lower_num << left_shift_bits;
+	}
+
+	if (right_shift_bits >= 64) {
+		lower_num = upper_num >> (right_shift_bits - 64);
+		upper_num = 0;
+	} else {
+		lower_num = (lower_num >> right_shift_bits) |
+			    (upper_num << (64 - right_shift_bits));
+		upper_num = upper_num >> right_shift_bits;
+	}
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	print_num[0] = upper_num;
+	print_num[1] = lower_num;
+#else
+	print_num[0] = lower_num;
+	print_num[1] = upper_num;
+#endif
+}
+
static void btf_dumper_bitfield(__u32 nr_bits, __u8 bit_offset,
				const void *data, json_writer_t *jw,
				bool is_plain_text)
{
	int left_shift_bits, right_shift_bits;
+	__u64 print_num[2] = {};
	int bytes_to_copy;
	int bits_to_copy;
-	__u64 print_num;

	bits_to_copy = bit_offset + nr_bits;
	bytes_to_copy = BITS_ROUNDUP_BYTES(bits_to_copy);

-	print_num = 0;
-	memcpy(&print_num, data, bytes_to_copy);
+	memcpy(print_num, data, bytes_to_copy);
#if defined(__BIG_ENDIAN_BITFIELD)
	left_shift_bits = bit_offset;
#elif defined(__LITTLE_ENDIAN_BITFIELD)
-	left_shift_bits = 64 - bits_to_copy;
+	left_shift_bits = 128 - bits_to_copy;
#else
#error neither big nor little endian
#endif
-	right_shift_bits = 64 - nr_bits;
+	right_shift_bits = 128 - nr_bits;

-	print_num <<= left_shift_bits;
-	print_num >>= right_shift_bits;
-	if (is_plain_text)
-		jsonw_printf(jw, "0x%llx", print_num);
-	else
-		jsonw_printf(jw, "%llu", print_num);
+	btf_int128_shift(print_num, left_shift_bits, right_shift_bits);
+	btf_int128_print(jw, print_num, is_plain_text);
}
@@ -113,7 +182,7 @@ static void btf_dumper_int_bits(__u32 int_type, __u8 bit_offset,
	int total_bits_offset;

	/* bits_offset is at most 7.
-	 * BTF_INT_OFFSET() cannot exceed 64 bits.
+	 * BTF_INT_OFFSET() cannot exceed 128 bits.
	 */
	total_bits_offset = bit_offset + BTF_INT_OFFSET(int_type);
	data += BITS_ROUNDDOWN_BYTES(total_bits_offset);
@@ -139,6 +208,11 @@ static int btf_dumper_int(const struct btf_type *t, __u8 bit_offset,
		return 0;
	}

+	if (nr_bits == 128) {
+		btf_int128_print(jw, data, is_plain_text);
+		return 0;
+	}
+
	switch (BTF_INT_ENCODING(*int_type)) {
	case 0:
		if (BTF_INT_BITS(*int_type) == 64)
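A standalone sketch, assuming a compiler with __int128 support, of the upper/lower split that btf_int128_print() performs above, printing in the same zero-suppressed hex format:

#include <stdio.h>

int main(void)
{
	unsigned __int128 v = ((unsigned __int128)0x1ULL << 64) | 0x2a;
	unsigned long long upper = (unsigned long long)(v >> 64);
	unsigned long long lower = (unsigned long long)v;

	if (upper == 0)
		printf("0x%llx\n", lower);
	else
		printf("0x%llx%016llx\n", upper, lower); /* 0x1000000000000002a */
	return 0;
}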
@@ -157,6 +157,11 @@ static bool cfg_partition_funcs(struct cfg *cfg, struct bpf_insn *cur,
	return false;
}

+static bool is_jmp_insn(u8 code)
+{
+	return BPF_CLASS(code) == BPF_JMP || BPF_CLASS(code) == BPF_JMP32;
+}
+
static bool func_partition_bb_head(struct func_node *func)
{
	struct bpf_insn *cur, *end;
@@ -170,7 +175,7 @@ static bool func_partition_bb_head(struct func_node *func)
		return true;

	for (; cur <= end; cur++) {
-		if (BPF_CLASS(cur->code) == BPF_JMP) {
+		if (is_jmp_insn(cur->code)) {
			u8 opcode = BPF_OP(cur->code);

			if (opcode == BPF_EXIT || opcode == BPF_CALL)
@@ -296,7 +301,7 @@ static bool func_add_bb_edges(struct func_node *func)
		e->src = bb;

		insn = bb->tail;
-		if (BPF_CLASS(insn->code) != BPF_JMP ||
+		if (!is_jmp_insn(insn->code) ||
		    BPF_OP(insn->code) == BPF_EXIT) {
			e->dst = bb_next(bb);
			e->flags |= EDGE_FLAG_FALLTHROUGH;
@@ -0,0 +1,764 @@
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/* Copyright (c) 2019 Netronome Systems, Inc. */

#include <ctype.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/utsname.h>
#include <sys/vfs.h>

#include <linux/filter.h>
#include <linux/limits.h>

#include <bpf.h>
#include <libbpf.h>

#include "main.h"

#ifndef PROC_SUPER_MAGIC
# define PROC_SUPER_MAGIC	0x9fa0
#endif

enum probe_component {
	COMPONENT_UNSPEC,
	COMPONENT_KERNEL,
	COMPONENT_DEVICE,
};

#define BPF_HELPER_MAKE_ENTRY(name)	[BPF_FUNC_ ## name] = "bpf_" # name
static const char * const helper_name[] = {
	__BPF_FUNC_MAPPER(BPF_HELPER_MAKE_ENTRY)
};

#undef BPF_HELPER_MAKE_ENTRY

/* Miscellaneous utility functions */

static bool check_procfs(void)
{
	struct statfs st_fs;

	if (statfs("/proc", &st_fs) < 0)
		return false;
	if ((unsigned long)st_fs.f_type != PROC_SUPER_MAGIC)
		return false;

	return true;
}

static void uppercase(char *str, size_t len)
{
	size_t i;

	for (i = 0; i < len && str[i] != '\0'; i++)
		str[i] = toupper(str[i]);
}

/* Printing utility functions */

static void
print_bool_feature(const char *feat_name, const char *plain_name,
		   const char *define_name, bool res, const char *define_prefix)
{
	if (json_output)
		jsonw_bool_field(json_wtr, feat_name, res);
	else if (define_prefix)
		printf("#define %s%sHAVE_%s\n", define_prefix,
		       res ? "" : "NO_", define_name);
	else
		printf("%s is %savailable\n", plain_name, res ? "" : "NOT ");
}

static void print_kernel_option(const char *name, const char *value)
{
	char *endptr;
	int res;

	/* No support for C-style output */

	if (json_output) {
		if (!value) {
			jsonw_null_field(json_wtr, name);
			return;
		}
		errno = 0;
		res = strtol(value, &endptr, 0);
		if (!errno && *endptr == '\n')
			jsonw_int_field(json_wtr, name, res);
		else
			jsonw_string_field(json_wtr, name, value);
	} else {
		if (value)
			printf("%s is set to %s\n", name, value);
		else
			printf("%s is not set\n", name);
	}
}

static void
print_start_section(const char *json_title, const char *plain_title,
		    const char *define_comment, const char *define_prefix)
{
	if (json_output) {
		jsonw_name(json_wtr, json_title);
		jsonw_start_object(json_wtr);
	} else if (define_prefix) {
		printf("%s\n", define_comment);
	} else {
		printf("%s\n", plain_title);
	}
}

static void
print_end_then_start_section(const char *json_title, const char *plain_title,
			     const char *define_comment,
			     const char *define_prefix)
{
	if (json_output)
		jsonw_end_object(json_wtr);
	else
		printf("\n");

	print_start_section(json_title, plain_title, define_comment,
			    define_prefix);
}

/* Probing functions */

static int read_procfs(const char *path)
{
	char *endptr, *line = NULL;
	size_t len = 0;
	FILE *fd;
	int res;

	fd = fopen(path, "r");
	if (!fd)
		return -1;

	res = getline(&line, &len, fd);
	fclose(fd);
	if (res < 0)
		return -1;

	errno = 0;
	res = strtol(line, &endptr, 10);
	if (errno || *line == '\0' || *endptr != '\n')
		res = -1;
	free(line);

	return res;
}

static void probe_unprivileged_disabled(void)
{
	int res;

	/* No support for C-style output */

	res = read_procfs("/proc/sys/kernel/unprivileged_bpf_disabled");
	if (json_output) {
		jsonw_int_field(json_wtr, "unprivileged_bpf_disabled", res);
	} else {
		switch (res) {
		case 0:
			printf("bpf() syscall for unprivileged users is enabled\n");
			break;
		case 1:
			printf("bpf() syscall restricted to privileged users\n");
			break;
		case -1:
			printf("Unable to retrieve required privileges for bpf() syscall\n");
			break;
		default:
			printf("bpf() syscall restriction has unknown value %d\n", res);
		}
	}
}

static void probe_jit_enable(void)
{
	int res;

	/* No support for C-style output */

	res = read_procfs("/proc/sys/net/core/bpf_jit_enable");
	if (json_output) {
		jsonw_int_field(json_wtr, "bpf_jit_enable", res);
	} else {
		switch (res) {
		case 0:
			printf("JIT compiler is disabled\n");
			break;
		case 1:
			printf("JIT compiler is enabled\n");
			break;
		case 2:
			printf("JIT compiler is enabled with debugging traces in kernel logs\n");
			break;
		case -1:
			printf("Unable to retrieve JIT-compiler status\n");
			break;
		default:
			printf("JIT-compiler status has unknown value %d\n",
			       res);
		}
	}
}

static void probe_jit_harden(void)
{
	int res;

	/* No support for C-style output */

	res = read_procfs("/proc/sys/net/core/bpf_jit_harden");
	if (json_output) {
		jsonw_int_field(json_wtr, "bpf_jit_harden", res);
	} else {
		switch (res) {
		case 0:
			printf("JIT compiler hardening is disabled\n");
			break;
		case 1:
			printf("JIT compiler hardening is enabled for unprivileged users\n");
			break;
		case 2:
			printf("JIT compiler hardening is enabled for all users\n");
			break;
		case -1:
			printf("Unable to retrieve JIT hardening status\n");
			break;
		default:
			printf("JIT hardening status has unknown value %d\n",
			       res);
		}
	}
}

static void probe_jit_kallsyms(void)
{
	int res;

	/* No support for C-style output */

	res = read_procfs("/proc/sys/net/core/bpf_jit_kallsyms");
	if (json_output) {
		jsonw_int_field(json_wtr, "bpf_jit_kallsyms", res);
	} else {
		switch (res) {
		case 0:
			printf("JIT compiler kallsyms exports are disabled\n");
			break;
		case 1:
			printf("JIT compiler kallsyms exports are enabled for root\n");
			break;
		case -1:
			printf("Unable to retrieve JIT kallsyms export status\n");
			break;
		default:
			printf("JIT kallsyms exports status has unknown value %d\n", res);
		}
	}
}

static void probe_jit_limit(void)
{
	int res;

	/* No support for C-style output */

	res = read_procfs("/proc/sys/net/core/bpf_jit_limit");
	if (json_output) {
		jsonw_int_field(json_wtr, "bpf_jit_limit", res);
	} else {
		switch (res) {
		case -1:
			printf("Unable to retrieve global memory limit for JIT compiler for unprivileged users\n");
			break;
		default:
			printf("Global memory limit for JIT compiler for unprivileged users is %d bytes\n", res);
		}
	}
}

static char *get_kernel_config_option(FILE *fd, const char *option)
{
	size_t line_n = 0, optlen = strlen(option);
	char *res, *strval, *line = NULL;
	ssize_t n;

	rewind(fd);
	while ((n = getline(&line, &line_n, fd)) > 0) {
		if (strncmp(line, option, optlen))
			continue;
		/* Check we have at least '=', value, and '\n' */
		if (strlen(line) < optlen + 3)
			continue;
		if (*(line + optlen) != '=')
			continue;

		/* Trim ending '\n' */
		line[strlen(line) - 1] = '\0';

		/* Copy and return config option value */
		strval = line + optlen + 1;
		res = strdup(strval);
		free(line);
		return res;
	}
	free(line);

	return NULL;
}

static void probe_kernel_image_config(void)
{
	static const char * const options[] = {
		/* Enable BPF */
		"CONFIG_BPF",
		/* Enable bpf() syscall */
		"CONFIG_BPF_SYSCALL",
		/* Does selected architecture support eBPF JIT compiler */
		"CONFIG_HAVE_EBPF_JIT",
		/* Compile eBPF JIT compiler */
		"CONFIG_BPF_JIT",
		/* Avoid compiling eBPF interpreter (use JIT only) */
		"CONFIG_BPF_JIT_ALWAYS_ON",

		/* cgroups */
		"CONFIG_CGROUPS",
		/* BPF programs attached to cgroups */
		"CONFIG_CGROUP_BPF",
		/* bpf_get_cgroup_classid() helper */
		"CONFIG_CGROUP_NET_CLASSID",
		/* bpf_skb_{,ancestor_}cgroup_id() helpers */
		"CONFIG_SOCK_CGROUP_DATA",

		/* Tracing: attach BPF to kprobes, tracepoints, etc. */
		"CONFIG_BPF_EVENTS",
		/* Kprobes */
		"CONFIG_KPROBE_EVENTS",
		/* Uprobes */
		"CONFIG_UPROBE_EVENTS",
		/* Tracepoints */
		"CONFIG_TRACING",
		/* Syscall tracepoints */
		"CONFIG_FTRACE_SYSCALLS",
		/* bpf_override_return() helper support for selected arch */
		"CONFIG_FUNCTION_ERROR_INJECTION",
		/* bpf_override_return() helper */
		"CONFIG_BPF_KPROBE_OVERRIDE",

		/* Network */
		"CONFIG_NET",
		/* AF_XDP sockets */
		"CONFIG_XDP_SOCKETS",
		/* BPF_PROG_TYPE_LWT_* and related helpers */
		"CONFIG_LWTUNNEL_BPF",
		/* BPF_PROG_TYPE_SCHED_ACT, TC (traffic control) actions */
		"CONFIG_NET_ACT_BPF",
		/* BPF_PROG_TYPE_SCHED_CLS, TC filters */
		"CONFIG_NET_CLS_BPF",
		/* TC clsact qdisc */
		"CONFIG_NET_CLS_ACT",
		/* Ingress filtering with TC */
		"CONFIG_NET_SCH_INGRESS",
		/* bpf_skb_get_xfrm_state() helper */
		"CONFIG_XFRM",
		/* bpf_get_route_realm() helper */
		"CONFIG_IP_ROUTE_CLASSID",
		/* BPF_PROG_TYPE_LWT_SEG6_LOCAL and related helpers */
		"CONFIG_IPV6_SEG6_BPF",
		/* BPF_PROG_TYPE_LIRC_MODE2 and related helpers */
		"CONFIG_BPF_LIRC_MODE2",
		/* BPF stream parser and BPF socket maps */
		"CONFIG_BPF_STREAM_PARSER",
		/* xt_bpf module for passing BPF programs to netfilter */
		"CONFIG_NETFILTER_XT_MATCH_BPF",
		/* bpfilter back-end for iptables */
		"CONFIG_BPFILTER",
		/* bpfilter module with "user mode helper" */
		"CONFIG_BPFILTER_UMH",

		/* test_bpf module for BPF tests */
		"CONFIG_TEST_BPF",
	};
	char *value, *buf = NULL;
	struct utsname utsn;
	char path[PATH_MAX];
	size_t i, n;
	ssize_t ret;
	FILE *fd;

	if (uname(&utsn))
		goto no_config;

	snprintf(path, sizeof(path), "/boot/config-%s", utsn.release);

	fd = fopen(path, "r");
	if (!fd && errno == ENOENT) {
		/* Some distributions put the config file at /proc/config, give
		 * it a try.
		 * Sometimes it is also at /proc/config.gz but we do not try
		 * this one for now, it would require linking against libz.
		 */
		fd = fopen("/proc/config", "r");
	}
	if (!fd) {
		p_info("skipping kernel config, can't open file: %s",
		       strerror(errno));
		goto no_config;
	}
	/* Sanity checks: the first line of a kernel config file is a "#"
	 * banner and the second is the "Automatically generated" comment
	 * checked below, hence the two getline() calls.
	 */
	ret = getline(&buf, &n, fd);
	ret = getline(&buf, &n, fd);
	if (!buf || !ret) {
		p_info("skipping kernel config, can't read from file: %s",
		       strerror(errno));
		free(buf);
		goto no_config;
	}
	if (strcmp(buf, "# Automatically generated file; DO NOT EDIT.\n")) {
		p_info("skipping kernel config, can't find correct file");
		free(buf);
		goto no_config;
	}
	free(buf);

	for (i = 0; i < ARRAY_SIZE(options); i++) {
		value = get_kernel_config_option(fd, options[i]);
		print_kernel_option(options[i], value);
		free(value);
	}
	fclose(fd);
	return;

no_config:
	for (i = 0; i < ARRAY_SIZE(options); i++)
		print_kernel_option(options[i], NULL);
}

static bool probe_bpf_syscall(const char *define_prefix)
{
	bool res;

	bpf_load_program(BPF_PROG_TYPE_UNSPEC, NULL, 0, NULL, 0, NULL, 0);
	res = (errno != ENOSYS);

	print_bool_feature("have_bpf_syscall",
			   "bpf() syscall",
			   "BPF_SYSCALL",
			   res, define_prefix);

	return res;
}

static void
probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types,
		const char *define_prefix, __u32 ifindex)
{
	char feat_name[128], plain_desc[128], define_name[128];
	const char *plain_comment = "eBPF program_type ";
	size_t maxlen;
	bool res;

	if (ifindex)
		/* Only test offload-able program types */
		switch (prog_type) {
		case BPF_PROG_TYPE_SCHED_CLS:
		case BPF_PROG_TYPE_XDP:
			break;
		default:
			return;
		}

	res = bpf_probe_prog_type(prog_type, ifindex);

	supported_types[prog_type] |= res;

	maxlen = sizeof(plain_desc) - strlen(plain_comment) - 1;
	if (strlen(prog_type_name[prog_type]) > maxlen) {
		p_info("program type name too long");
		return;
	}

	sprintf(feat_name, "have_%s_prog_type", prog_type_name[prog_type]);
	sprintf(define_name, "%s_prog_type", prog_type_name[prog_type]);
	uppercase(define_name, sizeof(define_name));
	sprintf(plain_desc, "%s%s", plain_comment, prog_type_name[prog_type]);
	print_bool_feature(feat_name, plain_desc, define_name, res,
			   define_prefix);
}

static void
probe_map_type(enum bpf_map_type map_type, const char *define_prefix,
	       __u32 ifindex)
{
	char feat_name[128], plain_desc[128], define_name[128];
	const char *plain_comment = "eBPF map_type ";
	size_t maxlen;
	bool res;

	res = bpf_probe_map_type(map_type, ifindex);

	maxlen = sizeof(plain_desc) - strlen(plain_comment) - 1;
	if (strlen(map_type_name[map_type]) > maxlen) {
		p_info("map type name too long");
		return;
	}

	sprintf(feat_name, "have_%s_map_type", map_type_name[map_type]);
	sprintf(define_name, "%s_map_type", map_type_name[map_type]);
	uppercase(define_name, sizeof(define_name));
	sprintf(plain_desc, "%s%s", plain_comment, map_type_name[map_type]);
	print_bool_feature(feat_name, plain_desc, define_name, res,
			   define_prefix);
}

static void
probe_helpers_for_progtype(enum bpf_prog_type prog_type, bool supported_type,
			   const char *define_prefix, __u32 ifindex)
{
	const char *ptype_name = prog_type_name[prog_type];
	char feat_name[128];
	unsigned int id;
	bool res;

	if (ifindex)
		/* Only test helpers for offload-able program types */
		switch (prog_type) {
		case BPF_PROG_TYPE_SCHED_CLS:
		case BPF_PROG_TYPE_XDP:
			break;
		default:
			return;
		}

	if (json_output) {
		sprintf(feat_name, "%s_available_helpers", ptype_name);
		jsonw_name(json_wtr, feat_name);
		jsonw_start_array(json_wtr);
	} else if (!define_prefix) {
		printf("eBPF helpers supported for program type %s:",
		       ptype_name);
	}

	for (id = 1; id < ARRAY_SIZE(helper_name); id++) {
		if (!supported_type)
			res = false;
		else
			res = bpf_probe_helper(id, prog_type, ifindex);

		if (json_output) {
			if (res)
				jsonw_string(json_wtr, helper_name[id]);
		} else if (define_prefix) {
			printf("#define %sBPF__PROG_TYPE_%s__HELPER_%s %s\n",
			       define_prefix, ptype_name, helper_name[id],
			       res ? "1" : "0");
		} else {
			if (res)
				printf("\n\t- %s", helper_name[id]);
		}
	}

	if (json_output)
		jsonw_end_array(json_wtr);
	else if (!define_prefix)
		printf("\n");
}

static int do_probe(int argc, char **argv)
{
	enum probe_component target = COMPONENT_UNSPEC;
	const char *define_prefix = NULL;
	bool supported_types[128] = {};
	__u32 ifindex = 0;
	unsigned int i;
	char *ifname;

	/* Detection assumes user has sufficient privileges (CAP_SYS_ADMIN).
	 * Let's approximate, and restrict usage to root user only.
	 */
	if (geteuid()) {
		p_err("please run this command as root user");
		return -1;
	}

	set_max_rlimit();

	while (argc) {
		if (is_prefix(*argv, "kernel")) {
			if (target != COMPONENT_UNSPEC) {
				p_err("component to probe already specified");
				return -1;
			}
			target = COMPONENT_KERNEL;
			NEXT_ARG();
		} else if (is_prefix(*argv, "dev")) {
			NEXT_ARG();

			if (target != COMPONENT_UNSPEC || ifindex) {
				p_err("component to probe already specified");
				return -1;
			}
			if (!REQ_ARGS(1))
				return -1;

			target = COMPONENT_DEVICE;
			ifname = GET_ARG();
			ifindex = if_nametoindex(ifname);
			if (!ifindex) {
				p_err("unrecognized netdevice '%s': %s", ifname,
				      strerror(errno));
				return -1;
			}
		} else if (is_prefix(*argv, "macros") && !define_prefix) {
			define_prefix = "";
			NEXT_ARG();
		} else if (is_prefix(*argv, "prefix")) {
			if (!define_prefix) {
				p_err("'prefix' argument can only be used after 'macros'");
				return -1;
			}
			if (strcmp(define_prefix, "")) {
				p_err("'prefix' already defined");
				return -1;
			}
			NEXT_ARG();

			if (!REQ_ARGS(1))
				return -1;
			define_prefix = GET_ARG();
		} else {
			p_err("expected no more arguments, 'kernel', 'dev', 'macros' or 'prefix', got: '%s'?",
			      *argv);
			return -1;
		}
	}

	if (json_output) {
		define_prefix = NULL;
		jsonw_start_object(json_wtr);
	}

	switch (target) {
	case COMPONENT_KERNEL:
	case COMPONENT_UNSPEC:
		if (define_prefix)
			break;

		print_start_section("system_config",
				    "Scanning system configuration...",
				    NULL, /* define_comment never used here */
				    NULL); /* define_prefix always NULL here */
		if (check_procfs()) {
			probe_unprivileged_disabled();
			probe_jit_enable();
			probe_jit_harden();
			probe_jit_kallsyms();
			probe_jit_limit();
		} else {
			p_info("/* procfs not mounted, skipping related probes */");
		}
		probe_kernel_image_config();
		if (json_output)
			jsonw_end_object(json_wtr);
		else
			printf("\n");
		break;
	default:
		break;
	}

	print_start_section("syscall_config",
			    "Scanning system call availability...",
			    "/*** System call availability ***/",
			    define_prefix);

	if (!probe_bpf_syscall(define_prefix))
		/* bpf() syscall unavailable, don't probe other BPF features */
		goto exit_close_json;

	print_end_then_start_section("program_types",
				     "Scanning eBPF program types...",
				     "/*** eBPF program types ***/",
				     define_prefix);

	for (i = BPF_PROG_TYPE_UNSPEC + 1; i < ARRAY_SIZE(prog_type_name); i++)
		probe_prog_type(i, supported_types, define_prefix, ifindex);

	print_end_then_start_section("map_types",
				     "Scanning eBPF map types...",
				     "/*** eBPF map types ***/",
				     define_prefix);

	for (i = BPF_MAP_TYPE_UNSPEC + 1; i < map_type_name_size; i++)
		probe_map_type(i, define_prefix, ifindex);

	print_end_then_start_section("helpers",
				     "Scanning eBPF helper functions...",
				     "/*** eBPF helper functions ***/",
				     define_prefix);

	if (define_prefix)
		printf("/*\n"
		       " * Use %sHAVE_PROG_TYPE_HELPER(prog_type_name, helper_name)\n"
		       " * to determine if <helper_name> is available for <prog_type_name>,\n"
		       " * e.g.\n"
		       " *   #if %sHAVE_PROG_TYPE_HELPER(xdp, bpf_redirect)\n"
		       " *     // do stuff with this helper\n"
		       " *   #elif\n"
		       " *     // use a workaround\n"
		       " *   #endif\n"
		       " */\n"
		       "#define %sHAVE_PROG_TYPE_HELPER(prog_type, helper)	\\\n"
		       "	%sBPF__PROG_TYPE_ ## prog_type ## __HELPER_ ## helper\n",
		       define_prefix, define_prefix, define_prefix,
		       define_prefix);
	for (i = BPF_PROG_TYPE_UNSPEC + 1; i < ARRAY_SIZE(prog_type_name); i++)
		probe_helpers_for_progtype(i, supported_types[i],
					   define_prefix, ifindex);

exit_close_json:
	if (json_output) {
		/* End current "section" of probes */
		jsonw_end_object(json_wtr);
		/* End root object */
		jsonw_end_object(json_wtr);
	}

	return 0;
}

static int do_help(int argc, char **argv)
{
	if (json_output) {
		jsonw_null(json_wtr);
		return 0;
	}

	fprintf(stderr,
		"Usage: %s %s probe [COMPONENT] [macros [prefix PREFIX]]\n"
		"       %s %s help\n"
		"\n"
		"       COMPONENT := { kernel | dev NAME }\n"
		"",
		bin_name, argv[-2], bin_name, argv[-2]);

	return 0;
}

static const struct cmd cmds[] = {
	{ "probe",	do_probe },
	{ "help",	do_help },
	{ 0 }
};

int do_feature(int argc, char **argv)
{
	return cmd_select(cmds, argc, argv, do_help);
}
@@ -56,7 +56,7 @@ static int do_help(int argc, char **argv)
		"       %s batch file FILE\n"
		"       %s version\n"
		"\n"
-		"       OBJECT := { prog | map | cgroup | perf | net }\n"
+		"       OBJECT := { prog | map | cgroup | perf | net | feature }\n"
		"       " HELP_SPEC_OPTIONS "\n"
		"",
		bin_name, bin_name, bin_name);
@@ -187,6 +187,7 @@ static const struct cmd cmds[] = {
	{ "cgroup",	do_cgroup },
	{ "perf",	do_perf },
	{ "net",	do_net },
+	{ "feature",	do_feature },
	{ "version",	do_version },
	{ 0 }
};
@@ -75,6 +75,9 @@ static const char * const prog_type_name[] = {
	[BPF_PROG_TYPE_FLOW_DISSECTOR]	= "flow_dissector",
};

+extern const char * const map_type_name[];
+extern const size_t map_type_name_size;
+
enum bpf_obj_type {
	BPF_OBJ_UNKNOWN,
	BPF_OBJ_PROG,
@@ -145,6 +148,7 @@ int do_cgroup(int argc, char **arg);
int do_perf(int argc, char **arg);
int do_net(int argc, char **arg);
int do_tracelog(int argc, char **arg);
+int do_feature(int argc, char **argv);

int parse_u32_arg(int *argc, char ***argv, __u32 *val, const char *what);
int prog_parse_fd(int *argc, char ***argv);
@@ -21,7 +21,7 @@
#include "json_writer.h"
#include "main.h"

-static const char * const map_type_name[] = {
+const char * const map_type_name[] = {
	[BPF_MAP_TYPE_UNSPEC]		= "unspec",
	[BPF_MAP_TYPE_HASH]		= "hash",
	[BPF_MAP_TYPE_ARRAY]		= "array",
@@ -48,6 +48,8 @@ static const char * const map_type_name[] = {
	[BPF_MAP_TYPE_STACK]		= "stack",
};

+const size_t map_type_name_size = ARRAY_SIZE(map_type_name);
+
static bool map_is_per_cpu(__u32 type)
{
	return type == BPF_MAP_TYPE_PERCPU_HASH ||
@@ -285,16 +287,21 @@ static void print_entry_plain(struct bpf_map_info *info, unsigned char *key,
		single_line = info->key_size + info->value_size <= 24 &&
			      !break_names;

-		printf("key:%c", break_names ? '\n' : ' ');
-		fprint_hex(stdout, key, info->key_size, " ");
+		if (info->key_size) {
+			printf("key:%c", break_names ? '\n' : ' ');
+			fprint_hex(stdout, key, info->key_size, " ");

-		printf(single_line ? "  " : "\n");
+			printf(single_line ? "  " : "\n");
+		}

-		printf("value:%c", break_names ? '\n' : ' ');
-		if (value)
-			fprint_hex(stdout, value, info->value_size, " ");
-		else
-			printf("<no entry>");
+		if (info->value_size) {
+			printf("value:%c", break_names ? '\n' : ' ');
+			if (value)
+				fprint_hex(stdout, value, info->value_size,
+					   " ");
+			else
+				printf("<no entry>");
+		}

		printf("\n");
	} else {
@@ -303,19 +310,23 @@ static void print_entry_plain(struct bpf_map_info *info, unsigned char *key,
		n = get_possible_cpus();
		step = round_up(info->value_size, 8);

-		printf("key:\n");
-		fprint_hex(stdout, key, info->key_size, " ");
-		printf("\n");
-		for (i = 0; i < n; i++) {
-			printf("value (CPU %02d):%c",
-			       i, info->value_size > 16 ? '\n' : ' ');
-			if (value)
-				fprint_hex(stdout, value + i * step,
-					   info->value_size, " ");
-			else
-				printf("<no entry>");
+		if (info->key_size) {
+			printf("key:\n");
+			fprint_hex(stdout, key, info->key_size, " ");
+			printf("\n");
+		}
+		if (info->value_size) {
+			for (i = 0; i < n; i++) {
+				printf("value (CPU %02d):%c",
+				       i, info->value_size > 16 ? '\n' : ' ');
+				if (value)
+					fprint_hex(stdout, value + i * step,
+						   info->value_size, " ");
+				else
+					printf("<no entry>");
+				printf("\n");
+			}
		}
	}
}
@@ -415,6 +426,9 @@ static int parse_elem(char **argv, struct bpf_map_info *info,
			p_err("not enough value arguments for map of progs");
			return -1;
		}
+		if (is_prefix(*argv, "id"))
+			p_info("Warning: updating program array via MAP_ID, make sure this map is kept open\n"
+			       "         by some process or pinned otherwise update will be lost");

		fd = prog_parse_fd(&argc, &argv);
		if (fd < 0)
@@ -779,6 +793,32 @@ exit_free:
	return err;
}

+static int alloc_key_value(struct bpf_map_info *info, void **key, void **value)
+{
+	*key = NULL;
+	*value = NULL;
+
+	if (info->key_size) {
+		*key = malloc(info->key_size);
+		if (!*key) {
+			p_err("key mem alloc failed");
+			return -1;
+		}
+	}
+
+	if (info->value_size) {
+		*value = alloc_value(info);
+		if (!*value) {
+			p_err("value mem alloc failed");
+			free(*key);
+			*key = NULL;
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
static int do_update(int argc, char **argv)
{
	struct bpf_map_info info = {};
@@ -795,13 +835,9 @@ static int do_update(int argc, char **argv)
	if (fd < 0)
		return -1;

-	key = malloc(info.key_size);
-	value = alloc_value(&info);
-	if (!key || !value) {
-		p_err("mem alloc failed");
-		err = -1;
+	err = alloc_key_value(&info, &key, &value);
+	if (err)
		goto exit_free;
-	}

	err = parse_elem(argv, &info, key, value, info.key_size,
			 info.value_size, &flags, &value_fd);
@@ -826,12 +862,51 @@ exit_free:
	return err;
}

+static void print_key_value(struct bpf_map_info *info, void *key,
+			    void *value)
+{
+	json_writer_t *btf_wtr;
+	struct btf *btf = NULL;
+	int err;
+
+	err = btf__get_from_id(info->btf_id, &btf);
+	if (err) {
+		p_err("failed to get btf");
+		return;
+	}
+
+	if (json_output) {
+		print_entry_json(info, key, value, btf);
+	} else if (btf) {
+		/* if here json_wtr wouldn't have been initialised,
+		 * so let's create separate writer for btf
+		 */
+		btf_wtr = get_btf_writer();
+		if (!btf_wtr) {
+			p_info("failed to create json writer for btf. falling back to plain output");
+			btf__free(btf);
+			btf = NULL;
+			print_entry_plain(info, key, value);
+		} else {
+			struct btf_dumper d = {
+				.btf = btf,
+				.jw = btf_wtr,
+				.is_plain_text = true,
+			};
+
+			do_dump_btf(&d, info, key, value);
+			jsonw_destroy(&btf_wtr);
+		}
+	} else {
+		print_entry_plain(info, key, value);
+	}
+	btf__free(btf);
+}
+
static int do_lookup(int argc, char **argv)
{
	struct bpf_map_info info = {};
	__u32 len = sizeof(info);
-	json_writer_t *btf_wtr;
-	struct btf *btf = NULL;
	void *key, *value;
	int err;
	int fd;
@@ -843,13 +918,9 @@ static int do_lookup(int argc, char **argv)
	if (fd < 0)
		return -1;

-	key = malloc(info.key_size);
-	value = alloc_value(&info);
-	if (!key || !value) {
-		p_err("mem alloc failed");
-		err = -1;
+	err = alloc_key_value(&info, &key, &value);
+	if (err)
		goto exit_free;
-	}

	err = parse_elem(argv, &info, key, NULL, info.key_size, 0, NULL, NULL);
	if (err)
@@ -873,43 +944,12 @@ static int do_lookup(int argc, char **argv)
	}

	/* here means bpf_map_lookup_elem() succeeded */
-	err = btf__get_from_id(info.btf_id, &btf);
-	if (err) {
-		p_err("failed to get btf");
-		goto exit_free;
-	}
-
-	if (json_output) {
-		print_entry_json(&info, key, value, btf);
-	} else if (btf) {
-		/* if here json_wtr wouldn't have been initialised,
-		 * so let's create separate writer for btf
-		 */
-		btf_wtr = get_btf_writer();
-		if (!btf_wtr) {
-			p_info("failed to create json writer for btf. falling back to plain output");
-			btf__free(btf);
-			btf = NULL;
-			print_entry_plain(&info, key, value);
-		} else {
-			struct btf_dumper d = {
-				.btf = btf,
-				.jw = btf_wtr,
-				.is_plain_text = true,
-			};
-
-			do_dump_btf(&d, &info, key, value);
-			jsonw_destroy(&btf_wtr);
-		}
-	} else {
-		print_entry_plain(&info, key, value);
-	}
+	print_key_value(&info, key, value);

exit_free:
	free(key);
	free(value);
	close(fd);
-	btf__free(btf);

	return err;
}
@@ -1122,6 +1162,49 @@ static int do_create(int argc, char **argv)
	return 0;
}

+static int do_pop_dequeue(int argc, char **argv)
+{
+	struct bpf_map_info info = {};
+	__u32 len = sizeof(info);
+	void *key, *value;
+	int err;
+	int fd;
+
+	if (argc < 2)
+		usage();
+
+	fd = map_parse_fd_and_info(&argc, &argv, &info, &len);
+	if (fd < 0)
+		return -1;
+
+	err = alloc_key_value(&info, &key, &value);
+	if (err)
+		goto exit_free;
+
+	err = bpf_map_lookup_and_delete_elem(fd, key, value);
+	if (err) {
+		if (errno == ENOENT) {
+			if (json_output)
+				jsonw_null(json_wtr);
+			else
+				printf("Error: empty map\n");
+		} else {
+			p_err("pop failed: %s", strerror(errno));
+		}
+
+		goto exit_free;
+	}
+
+	print_key_value(&info, key, value);
+
+exit_free:
+	free(key);
+	free(value);
+	close(fd);
+
+	return err;
+}
+
static int do_help(int argc, char **argv)
{
	if (json_output) {
@@ -1135,12 +1218,17 @@ static int do_help(int argc, char **argv)
		"                              entries MAX_ENTRIES name NAME [flags FLAGS] \\\n"
		"                              [dev NAME]\n"
		"       %s %s dump MAP\n"
-		"       %s %s update MAP key DATA value VALUE [UPDATE_FLAGS]\n"
-		"       %s %s lookup MAP key DATA\n"
+		"       %s %s update MAP [key DATA] [value VALUE] [UPDATE_FLAGS]\n"
+		"       %s %s lookup MAP [key DATA]\n"
		"       %s %s getnext MAP [key DATA]\n"
		"       %s %s delete MAP key DATA\n"
		"       %s %s pin MAP FILE\n"
		"       %s %s event_pipe MAP [cpu N index M]\n"
+		"       %s %s peek MAP\n"
+		"       %s %s push MAP value VALUE\n"
+		"       %s %s pop MAP\n"
+		"       %s %s enqueue MAP value VALUE\n"
+		"       %s %s dequeue MAP\n"
		"       %s %s help\n"
		"\n"
		"       " HELP_SPEC_MAP "\n"
@@ -1158,7 +1246,8 @@ static int do_help(int argc, char **argv)
		bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2],
		bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2],
		bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2],
-		bin_name, argv[-2]);
+		bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2],
+		bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2]);

	return 0;
}
@@ -1175,6 +1264,11 @@ static const struct cmd cmds[] = {
	{ "pin",	do_pin },
	{ "event_pipe",	do_event_pipe },
	{ "create",	do_create },
+	{ "peek",	do_lookup },
+	{ "push",	do_update },
+	{ "enqueue",	do_update },
+	{ "pop",	do_pop_dequeue },
+	{ "dequeue",	do_pop_dequeue },
	{ 0 }
};
@@ -930,10 +930,9 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
			err = libbpf_prog_type_by_name(type, &attr.prog_type,
						       &expected_attach_type);
			free(type);
-			if (err < 0) {
-				p_err("unknown program type '%s'", *argv);
+			if (err < 0)
				goto err_free_reuse_maps;
-			}

			NEXT_ARG();
		} else if (is_prefix(*argv, "map")) {
			void *new_map_replace;
@@ -1028,11 +1027,8 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)

			err = libbpf_prog_type_by_name(sec_name, &prog_type,
						       &expected_attach_type);
-			if (err < 0) {
-				p_err("failed to guess program type based on section name %s\n",
-				      sec_name);
+			if (err < 0)
				goto err_close_obj;
-			}
		}

		bpf_program__set_ifindex(pos, ifindex);
@@ -199,6 +199,16 @@
		.off   = OFF,					\
		.imm   = 0 })

+/* Like BPF_JMP_REG, but with 32-bit wide operands for comparison. */
+
+#define BPF_JMP32_REG(OP, DST, SRC, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_JMP32 | BPF_OP(OP) | BPF_X,	\
+		.dst_reg = DST,					\
+		.src_reg = SRC,					\
+		.off   = OFF,					\
+		.imm   = 0 })
+
/* Conditional jumps against immediates, if (dst_reg 'op' imm32) goto pc + off16 */

#define BPF_JMP_IMM(OP, DST, IMM, OFF)				\
@@ -209,6 +219,16 @@
		.off   = OFF,					\
		.imm   = IMM })

+/* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
+
+#define BPF_JMP32_IMM(OP, DST, IMM, OFF)			\
+	((struct bpf_insn) {					\
+		.code  = BPF_JMP32 | BPF_OP(OP) | BPF_K,	\
+		.dst_reg = DST,					\
+		.src_reg = 0,					\
+		.off   = OFF,					\
+		.imm   = IMM })
+
/* Unconditional jumps, goto pc + off16 */

#define BPF_JMP_A(OFF)						\
@@ -14,6 +14,7 @@
/* Extended instruction set based on top of classic BPF */

/* instruction classes */
+#define BPF_JMP32	0x06	/* jmp mode in word width */
#define BPF_ALU64	0x07	/* alu mode in double word width */

/* ld/ldx fields */
@@ -2540,6 +2541,7 @@ struct __sk_buff {
	__bpf_md_ptr(struct bpf_flow_keys *, flow_keys);
	__u64 tstamp;
	__u32 wire_len;
+	__u32 gso_segs;
};

struct bpf_tunnel_key {
@@ -1 +1 @@
-libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o netlink.o bpf_prog_linfo.o
+libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o netlink.o bpf_prog_linfo.o libbpf_probes.o
@@ -14,21 +14,6 @@ srctree := $(patsubst %/,%,$(dir $(srctree)))
#$(info Determined 'srctree' to be $(srctree))
endif

-# Makefiles suck: This macro sets a default value of $(2) for the
-# variable named by $(1), unless the variable has been set by
-# environment or command line. This is necessary for CC and AR
-# because make sets default values, so the simpler ?= approach
-# won't work as expected.
-define allow-override
-  $(if $(or $(findstring environment,$(origin $(1))),\
-            $(findstring command line,$(origin $(1)))),,\
-    $(eval $(1) = $(2)))
-endef
-
-# Allow setting CC and AR, or setting CROSS_COMPILE as a prefix.
-$(call allow-override,CC,$(CROSS_COMPILE)gcc)
-$(call allow-override,AR,$(CROSS_COMPILE)ar)
-
INSTALL = install

# Use DESTDIR for installing into a different root directory.
@@ -54,7 +39,7 @@ man_dir_SQ = '$(subst ','\'',$(man_dir))'
export man_dir man_dir_SQ INSTALL
export DESTDIR DESTDIR_SQ

-include ../../scripts/Makefile.include
+include $(srctree)/tools/scripts/Makefile.include

# copy a bit from Linux kbuild
@@ -2667,9 +2667,38 @@ static const struct {
#undef BPF_EAPROG_SEC
#undef BPF_APROG_COMPAT

+#define MAX_TYPE_NAME_SIZE 32
+
+static char *libbpf_get_type_names(bool attach_type)
+{
+	int i, len = ARRAY_SIZE(section_names) * MAX_TYPE_NAME_SIZE;
+	char *buf;
+
+	buf = malloc(len);
+	if (!buf)
+		return NULL;
+
+	buf[0] = '\0';
+	/* Forge string buf with all available names */
+	for (i = 0; i < ARRAY_SIZE(section_names); i++) {
+		if (attach_type && !section_names[i].is_attachable)
+			continue;
+
+		if (strlen(buf) + strlen(section_names[i].sec) + 2 > len) {
+			free(buf);
+			return NULL;
+		}
+		strcat(buf, " ");
+		strcat(buf, section_names[i].sec);
+	}
+
+	return buf;
+}
+
int libbpf_prog_type_by_name(const char *name, enum bpf_prog_type *prog_type,
			     enum bpf_attach_type *expected_attach_type)
{
+	char *type_names;
	int i;

	if (!name)
@@ -2682,12 +2711,20 @@ int libbpf_prog_type_by_name(const char *name, enum bpf_prog_type *prog_type,
		*expected_attach_type = section_names[i].expected_attach_type;
		return 0;
	}
+	pr_warning("failed to guess program type based on ELF section name '%s'\n", name);
+	type_names = libbpf_get_type_names(false);
+	if (type_names != NULL) {
+		pr_info("supported section(type) names are:%s\n", type_names);
+		free(type_names);
+	}
+
	return -EINVAL;
}

int libbpf_attach_type_by_name(const char *name,
			       enum bpf_attach_type *attach_type)
{
+	char *type_names;
	int i;

	if (!name)
@@ -2701,6 +2738,13 @@ int libbpf_attach_type_by_name(const char *name,
		*attach_type = section_names[i].attach_type;
		return 0;
	}
+	pr_warning("failed to guess attach type based on ELF section name '%s'\n", name);
+	type_names = libbpf_get_type_names(true);
+	if (type_names != NULL) {
+		pr_info("attachable section(type) names are:%s\n", type_names);
+		free(type_names);
+	}
+
	return -EINVAL;
}
@@ -2907,8 +2951,6 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
		err = bpf_program__identify_section(prog, &prog_type,
						    &expected_attach_type);
		if (err < 0) {
-			pr_warning("failed to guess program type based on section name %s\n",
-				   prog->section_name);
			bpf_object__close(obj);
			return -EINVAL;
		}
@@ -355,6 +355,20 @@ LIBBPF_API const struct bpf_line_info *
bpf_prog_linfo__lfind(const struct bpf_prog_linfo *prog_linfo,
                      __u32 insn_off, __u32 nr_skip);

/*
 * Probe for supported system features
 *
 * Note that running many of these probes in a short amount of time can cause
 * the kernel to reach the maximal size of lockable memory allowed for the
 * user, causing subsequent probes to fail. In this case, the caller may want
 * to adjust that limit with setrlimit().
 */
LIBBPF_API bool bpf_probe_prog_type(enum bpf_prog_type prog_type,
                                    __u32 ifindex);
LIBBPF_API bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex);
LIBBPF_API bool bpf_probe_helper(enum bpf_func_id id,
                                 enum bpf_prog_type prog_type, __u32 ifindex);

#ifdef __cplusplus
} /* extern "C" */
#endif

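As the comment warns, each probe loads a small program or map and charges
the caller's locked memory. A sketch of how a caller might raise
RLIMIT_MEMLOCK before a burst of probes (lifting the limit entirely is one
common choice, not a requirement):

#include <stdio.h>
#include <sys/resource.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
    struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };

    /* Avoid spurious probe failures once the memlock budget is spent */
    if (setrlimit(RLIMIT_MEMLOCK, &rinf))
        perror("setrlimit");

    /* ifindex == 0 probes the kernel itself, not an offload device */
    printf("XDP progs: %d\n", bpf_probe_prog_type(BPF_PROG_TYPE_XDP, 0));
    printf("hash maps: %d\n", bpf_probe_map_type(BPF_MAP_TYPE_HASH, 0));
    return 0;
}
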
@@ -124,3 +124,10 @@ LIBBPF_0.0.1 {
    local:
        *;
};

LIBBPF_0.0.2 {
    global:
        bpf_probe_helper;
        bpf_probe_map_type;
        bpf_probe_prog_type;
} LIBBPF_0.0.1;

@@ -0,0 +1,242 @@
// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
/* Copyright (c) 2019 Netronome Systems, Inc. */

#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/utsname.h>

#include <linux/filter.h>
#include <linux/kernel.h>

#include "bpf.h"
#include "libbpf.h"

static bool grep(const char *buffer, const char *pattern)
{
    return !!strstr(buffer, pattern);
}

static int get_vendor_id(int ifindex)
{
    char ifname[IF_NAMESIZE], path[64], buf[8];
    ssize_t len;
    int fd;

    if (!if_indextoname(ifindex, ifname))
        return -1;

    snprintf(path, sizeof(path), "/sys/class/net/%s/device/vendor", ifname);

    fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    len = read(fd, buf, sizeof(buf));
    close(fd);
    if (len < 0)
        return -1;
    if (len >= (ssize_t)sizeof(buf))
        return -1;
    buf[len] = '\0';

    return strtol(buf, NULL, 0);
}

static int get_kernel_version(void)
{
    int version, subversion, patchlevel;
    struct utsname utsn;

    /* Return 0 on failure, and attempt to probe with empty kversion */
    if (uname(&utsn))
        return 0;

    if (sscanf(utsn.release, "%d.%d.%d",
               &version, &subversion, &patchlevel) != 3)
        return 0;

    return (version << 16) + (subversion << 8) + patchlevel;
}

static void
probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
           size_t insns_cnt, char *buf, size_t buf_len, __u32 ifindex)
{
    struct bpf_load_program_attr xattr = {};
    int fd;

    switch (prog_type) {
    case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
        xattr.expected_attach_type = BPF_CGROUP_INET4_CONNECT;
        break;
    case BPF_PROG_TYPE_KPROBE:
        xattr.kern_version = get_kernel_version();
        break;
    case BPF_PROG_TYPE_UNSPEC:
    case BPF_PROG_TYPE_SOCKET_FILTER:
    case BPF_PROG_TYPE_SCHED_CLS:
    case BPF_PROG_TYPE_SCHED_ACT:
    case BPF_PROG_TYPE_TRACEPOINT:
    case BPF_PROG_TYPE_XDP:
    case BPF_PROG_TYPE_PERF_EVENT:
    case BPF_PROG_TYPE_CGROUP_SKB:
    case BPF_PROG_TYPE_CGROUP_SOCK:
    case BPF_PROG_TYPE_LWT_IN:
    case BPF_PROG_TYPE_LWT_OUT:
    case BPF_PROG_TYPE_LWT_XMIT:
    case BPF_PROG_TYPE_SOCK_OPS:
    case BPF_PROG_TYPE_SK_SKB:
    case BPF_PROG_TYPE_CGROUP_DEVICE:
    case BPF_PROG_TYPE_SK_MSG:
    case BPF_PROG_TYPE_RAW_TRACEPOINT:
    case BPF_PROG_TYPE_LWT_SEG6LOCAL:
    case BPF_PROG_TYPE_LIRC_MODE2:
    case BPF_PROG_TYPE_SK_REUSEPORT:
    case BPF_PROG_TYPE_FLOW_DISSECTOR:
    default:
        break;
    }

    xattr.prog_type = prog_type;
    xattr.insns = insns;
    xattr.insns_cnt = insns_cnt;
    xattr.license = "GPL";
    xattr.prog_ifindex = ifindex;

    fd = bpf_load_program_xattr(&xattr, buf, buf_len);
    if (fd >= 0)
        close(fd);
}

bool bpf_probe_prog_type(enum bpf_prog_type prog_type, __u32 ifindex)
{
    struct bpf_insn insns[2] = {
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_EXIT_INSN()
    };

    if (ifindex && prog_type == BPF_PROG_TYPE_SCHED_CLS)
        /* nfp returns -EINVAL on exit(0) with TC offload */
        insns[0].imm = 2;

    errno = 0;
    probe_load(prog_type, insns, ARRAY_SIZE(insns), NULL, 0, ifindex);

    return errno != EINVAL && errno != EOPNOTSUPP;
}

bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex)
{
    int key_size, value_size, max_entries, map_flags;
    struct bpf_create_map_attr attr = {};
    int fd = -1, fd_inner;

    key_size = sizeof(__u32);
    value_size = sizeof(__u32);
    max_entries = 1;
    map_flags = 0;

    switch (map_type) {
    case BPF_MAP_TYPE_STACK_TRACE:
        value_size = sizeof(__u64);
        break;
    case BPF_MAP_TYPE_LPM_TRIE:
        key_size = sizeof(__u64);
        value_size = sizeof(__u64);
        map_flags = BPF_F_NO_PREALLOC;
        break;
    case BPF_MAP_TYPE_CGROUP_STORAGE:
    case BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE:
        key_size = sizeof(struct bpf_cgroup_storage_key);
        value_size = sizeof(__u64);
        max_entries = 0;
        break;
    case BPF_MAP_TYPE_QUEUE:
    case BPF_MAP_TYPE_STACK:
        key_size = 0;
        break;
    case BPF_MAP_TYPE_UNSPEC:
    case BPF_MAP_TYPE_HASH:
    case BPF_MAP_TYPE_ARRAY:
    case BPF_MAP_TYPE_PROG_ARRAY:
    case BPF_MAP_TYPE_PERF_EVENT_ARRAY:
    case BPF_MAP_TYPE_PERCPU_HASH:
    case BPF_MAP_TYPE_PERCPU_ARRAY:
    case BPF_MAP_TYPE_CGROUP_ARRAY:
    case BPF_MAP_TYPE_LRU_HASH:
    case BPF_MAP_TYPE_LRU_PERCPU_HASH:
    case BPF_MAP_TYPE_ARRAY_OF_MAPS:
    case BPF_MAP_TYPE_HASH_OF_MAPS:
    case BPF_MAP_TYPE_DEVMAP:
    case BPF_MAP_TYPE_SOCKMAP:
    case BPF_MAP_TYPE_CPUMAP:
    case BPF_MAP_TYPE_XSKMAP:
    case BPF_MAP_TYPE_SOCKHASH:
    case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
    default:
        break;
    }

    if (map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS ||
        map_type == BPF_MAP_TYPE_HASH_OF_MAPS) {
        /* TODO: probe for device, once libbpf has a function to create
         * map-in-map for offload
         */
        if (ifindex)
            return false;

        fd_inner = bpf_create_map(BPF_MAP_TYPE_HASH,
                                  sizeof(__u32), sizeof(__u32), 1, 0);
        if (fd_inner < 0)
            return false;
        fd = bpf_create_map_in_map(map_type, NULL, sizeof(__u32),
                                   fd_inner, 1, 0);
        close(fd_inner);
    } else {
        /* Note: No other restriction on map type probes for offload */
        attr.map_type = map_type;
        attr.key_size = key_size;
        attr.value_size = value_size;
        attr.max_entries = max_entries;
        attr.map_flags = map_flags;
        attr.map_ifindex = ifindex;

        fd = bpf_create_map_xattr(&attr);
    }
    if (fd >= 0)
        close(fd);

    return fd >= 0;
}

bool bpf_probe_helper(enum bpf_func_id id, enum bpf_prog_type prog_type,
                      __u32 ifindex)
{
    struct bpf_insn insns[2] = {
        BPF_EMIT_CALL(id),
        BPF_EXIT_INSN()
    };
    char buf[4096] = {};
    bool res;

    probe_load(prog_type, insns, ARRAY_SIZE(insns), buf, sizeof(buf),
               ifindex);
    res = !grep(buf, "invalid func ") && !grep(buf, "unknown func ");

    if (ifindex) {
        switch (get_vendor_id(ifindex)) {
        case 0x19ee: /* Netronome specific */
            res = res && !grep(buf, "not supported by FW") &&
                  !grep(buf, "unsupported function id");
            break;
        default:
            break;
        }
    }

    return res;
}

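A short usage sketch for the helper probe, pairing a helper ID with a
program type that may call it (probing bpf_map_lookup_elem from a socket
filter here is purely illustrative):

#include <stdio.h>
#include <stdbool.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
    /* Loads "call <id>; exit" and greps the verifier log for
     * "invalid func"/"unknown func", as probe_load() above shows. */
    bool ok = bpf_probe_helper(BPF_FUNC_map_lookup_elem,
                               BPF_PROG_TYPE_SOCKET_FILTER, 0);

    printf("bpf_map_lookup_elem: %savailable\n", ok ? "" : "not ");
    return 0;
}
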
@@ -10,15 +10,14 @@ ifneq ($(wildcard $(GENHDR)),)
GENFLAGS := -DHAVE_GENHDR
endif

CLANG ?= clang
LLC ?= llc
LLVM_OBJCOPY ?= llvm-objcopy
LLVM_READELF ?= llvm-readelf
BTF_PAHOLE ?= pahole
CFLAGS += -Wall -O2 -I$(APIDIR) -I$(LIBDIR) -I$(BPFDIR) -I$(GENDIR) $(GENFLAGS) -I../../../include
LDLIBS += -lcap -lelf -lrt -lpthread

TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read
all: $(TEST_CUSTOM_PROGS)

$(TEST_CUSTOM_PROGS): $(OUTPUT)/%: %.c
	$(CC) -o $(TEST_CUSTOM_PROGS) -static $< -Wl,--build-id

# Order correspond to 'make run_tests' order
TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \
	test_align test_verifier_log test_dev_cgroup test_tcpbpf_user \
@@ -26,21 +25,42 @@ TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test
	test_socket_cookie test_cgroup_storage test_select_reuseport test_section_names \
	test_netcnt test_tcpnotify_user

TEST_GEN_FILES = test_pkt_access.o test_xdp.o test_l4lb.o test_tcp_estats.o test_obj_id.o \
	test_pkt_md_access.o test_xdp_redirect.o test_xdp_meta.o sockmap_parse_prog.o \
	sockmap_verdict_prog.o dev_cgroup.o sample_ret0.o test_tracepoint.o \
	test_l4lb_noinline.o test_xdp_noinline.o test_stacktrace_map.o \
	test_tcpnotify_kern.o \
	sample_map_ret0.o test_tcpbpf_kern.o test_stacktrace_build_id.o \
	sockmap_tcp_msg_prog.o connect4_prog.o connect6_prog.o test_adjust_tail.o \
	test_btf_haskv.o test_btf_nokv.o test_sockmap_kern.o test_tunnel_kern.o \
	test_get_stack_rawtp.o test_sockmap_kern.o test_sockhash_kern.o \
	test_lwt_seg6local.o sendmsg4_prog.o sendmsg6_prog.o test_lirc_mode2_kern.o \
BPF_OBJ_FILES = \
	test_xdp_redirect.o test_xdp_meta.o sockmap_parse_prog.o \
	sockmap_verdict_prog.o dev_cgroup.o sample_ret0.o \
	test_tcpnotify_kern.o sample_map_ret0.o test_tcpbpf_kern.o \
	sockmap_tcp_msg_prog.o connect4_prog.o connect6_prog.o \
	test_btf_haskv.o test_btf_nokv.o test_sockmap_kern.o \
	test_tunnel_kern.o test_sockhash_kern.o test_lwt_seg6local.o \
	sendmsg4_prog.o sendmsg6_prog.o test_lirc_mode2_kern.o \
	get_cgroup_id_kern.o socket_cookie_prog.o test_select_reuseport_kern.o \
	test_skb_cgroup_id_kern.o bpf_flow.o netcnt_prog.o \
	test_sk_lookup_kern.o test_xdp_vlan.o test_queue_map.o test_stack_map.o \
	test_skb_cgroup_id_kern.o bpf_flow.o netcnt_prog.o test_xdp_vlan.o \
	xdp_dummy.o test_map_in_map.o

# Objects are built with default compilation flags and with sub-register
# code-gen enabled.
BPF_OBJ_FILES_DUAL_COMPILE = \
	test_pkt_access.o test_pkt_access.o test_xdp.o test_adjust_tail.o \
	test_l4lb.o test_l4lb_noinline.o test_xdp_noinline.o test_tcp_estats.o \
	test_obj_id.o test_pkt_md_access.o test_tracepoint.o \
	test_stacktrace_map.o test_stacktrace_map.o test_stacktrace_build_id.o \
	test_stacktrace_build_id.o test_get_stack_rawtp.o \
	test_get_stack_rawtp.o test_tracepoint.o test_sk_lookup_kern.o \
	test_queue_map.o test_stack_map.o

TEST_GEN_FILES = $(BPF_OBJ_FILES) $(BPF_OBJ_FILES_DUAL_COMPILE)

# Also test sub-register code-gen if LLVM + kernel both has eBPF v3 processor
# support which is the first version to contain both ALU32 and JMP32
# instructions.
SUBREG_CODEGEN := $(shell echo "int cal(int a) { return a > 0; }" | \
			$(CLANG) -target bpf -O2 -emit-llvm -S -x c - -o - | \
			$(LLC) -mattr=+alu32 -mcpu=probe 2>&1 | \
			grep 'if w')
ifneq ($(SUBREG_CODEGEN),)
TEST_GEN_FILES += $(patsubst %.o,alu32/%.o, $(BPF_OBJ_FILES_DUAL_COMPILE))
endif

# Order correspond to 'make run_tests' order
TEST_PROGS := test_kmod.sh \
	test_libbpf.sh \
@@ -66,6 +86,13 @@ TEST_GEN_PROGS_EXTENDED = test_libbpf_open test_sock_addr test_skb_cgroup_id_use

include ../lib.mk

# NOTE: $(OUTPUT) won't get default value if used before lib.mk
TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read
all: $(TEST_CUSTOM_PROGS)

$(OUTPUT)/urandom_read: $(OUTPUT)/%: %.c
	$(CC) -o $@ -static $< -Wl,--build-id

BPFOBJ := $(OUTPUT)/libbpf.a

$(TEST_GEN_PROGS): $(BPFOBJ)
@@ -93,11 +120,6 @@ force:
$(BPFOBJ): force
	$(MAKE) -C $(BPFDIR) OUTPUT=$(OUTPUT)/

CLANG ?= clang
LLC ?= llc
LLVM_OBJCOPY ?= llvm-objcopy
BTF_PAHOLE ?= pahole

PROBE := $(shell $(LLC) -march=bpf -mcpu=probe -filetype=null /dev/null 2>&1)

# Let newer LLVM versions transparently probe the kernel for availability
@@ -127,12 +149,15 @@ $(OUTPUT)/test_xdp_noinline.o: CLANG_FLAGS += -fno-inline
$(OUTPUT)/test_queue_map.o: test_queue_stack_map.h
$(OUTPUT)/test_stack_map.o: test_queue_stack_map.h

$(OUTPUT)/flow_dissector_load.o: flow_dissector_load.h
$(OUTPUT)/test_progs.o: flow_dissector_load.h

BTF_LLC_PROBE := $(shell $(LLC) -march=bpf -mattr=help 2>&1 | grep dwarfris)
BTF_PAHOLE_PROBE := $(shell $(BTF_PAHOLE) --help 2>&1 | grep BTF)
BTF_OBJCOPY_PROBE := $(shell $(LLVM_OBJCOPY) --help 2>&1 | grep -i 'usage.*llvm')
BTF_LLVM_PROBE := $(shell echo "int main() { return 0; }" | \
			$(CLANG) -target bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \
			readelf -S ./llvm_btf_verify.o | grep BTF; \
			$(LLVM_READELF) -S ./llvm_btf_verify.o | grep BTF; \
			/bin/rm -f ./llvm_btf_verify.o)

ifneq ($(BTF_LLVM_PROBE),)
@@ -149,6 +174,30 @@ endif
endif
endif

ifneq ($(SUBREG_CODEGEN),)
ALU32_BUILD_DIR = $(OUTPUT)/alu32
TEST_CUSTOM_PROGS += $(ALU32_BUILD_DIR)/test_progs_32
$(ALU32_BUILD_DIR):
	mkdir -p $@

$(ALU32_BUILD_DIR)/urandom_read: $(OUTPUT)/urandom_read
	cp $< $@

$(ALU32_BUILD_DIR)/test_progs_32: test_progs.c $(ALU32_BUILD_DIR) \
						$(ALU32_BUILD_DIR)/urandom_read
	$(CC) $(CFLAGS) -o $(ALU32_BUILD_DIR)/test_progs_32 $< \
		trace_helpers.c $(OUTPUT)/libbpf.a $(LDLIBS)

$(ALU32_BUILD_DIR)/%.o: %.c $(ALU32_BUILD_DIR) $(ALU32_BUILD_DIR)/test_progs_32
	$(CLANG) $(CLANG_FLAGS) \
		-O2 -target bpf -emit-llvm -c $< -o - | \
	$(LLC) -march=bpf -mattr=+alu32 -mcpu=$(CPU) $(LLC_FLAGS) \
		-filetype=obj -o $@
ifeq ($(DWARF2BTF),y)
	$(BTF_PAHOLE) -J $@
endif
endif

# Have one program compiled without "-target bpf" to test whether libbpf loads
# it successfully
$(OUTPUT)/test_xdp.o: test_xdp.c
@@ -167,4 +216,17 @@ ifeq ($(DWARF2BTF),y)
	$(BTF_PAHOLE) -J $@
endif

EXTRA_CLEAN := $(TEST_CUSTOM_PROGS)
$(OUTPUT)/test_verifier: $(OUTPUT)/verifier/tests.h
$(OUTPUT)/test_verifier: CFLAGS += -I$(OUTPUT)

VERIFIER_TEST_FILES := $(wildcard verifier/*.c)
$(OUTPUT)/verifier/tests.h: $(VERIFIER_TEST_FILES)
	$(shell ( cd verifier/
		echo '/* Generated header, do not edit */'; \
		echo '#ifdef FILL_ARRAY'; \
		ls *.c 2> /dev/null | \
			sed -e 's@\(.*\)@#include \"\1\"@'; \
		echo '#endif' \
	) > $(OUTPUT)/verifier/tests.h)

EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(ALU32_BUILD_DIR)

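For reference, the tests.h generated by the recipe above is nothing more
than a guarded list of #include directives. Assuming verifier/ holds just
the two fixtures added at the end of this commit (their file names are not
shown in this extract; and.c and array_access.c are an assumption), the
output would be roughly:

/* Generated header, do not edit */
#ifdef FILL_ARRAY
#include "and.c"
#include "array_access.c"
#endif

test_verifier then includes this header with FILL_ARRAY defined to splice
the per-file fixtures into its own test table.
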
@@ -12,6 +12,7 @@
#include <bpf/libbpf.h>

#include "bpf_rlimit.h"
#include "flow_dissector_load.h"

const char *cfg_pin_path = "/sys/fs/bpf/flow_dissector";
const char *cfg_map_name = "jmp_table";
@@ -21,46 +22,13 @@ char *cfg_path_name;

static void load_and_attach_program(void)
{
    struct bpf_program *prog, *main_prog;
    struct bpf_map *prog_array;
    int i, fd, prog_fd, ret;
    int prog_fd, ret;
    struct bpf_object *obj;
    int prog_array_fd;

    ret = bpf_prog_load(cfg_path_name, BPF_PROG_TYPE_FLOW_DISSECTOR, &obj,
                        &prog_fd);
    ret = bpf_flow_load(&obj, cfg_path_name, cfg_section_name,
                        cfg_map_name, &prog_fd);
    if (ret)
        error(1, 0, "bpf_prog_load %s", cfg_path_name);

    main_prog = bpf_object__find_program_by_title(obj, cfg_section_name);
    if (!main_prog)
        error(1, 0, "bpf_object__find_program_by_title %s",
              cfg_section_name);

    prog_fd = bpf_program__fd(main_prog);
    if (prog_fd < 0)
        error(1, 0, "bpf_program__fd");

    prog_array = bpf_object__find_map_by_name(obj, cfg_map_name);
    if (!prog_array)
        error(1, 0, "bpf_object__find_map_by_name %s", cfg_map_name);

    prog_array_fd = bpf_map__fd(prog_array);
    if (prog_array_fd < 0)
        error(1, 0, "bpf_map__fd %s", cfg_map_name);

    i = 0;
    bpf_object__for_each_program(prog, obj) {
        fd = bpf_program__fd(prog);
        if (fd < 0)
            error(1, 0, "bpf_program__fd");

        if (fd != prog_fd) {
            printf("%d: %s\n", i, bpf_program__title(prog, false));
            bpf_map_update_elem(prog_array_fd, &i, &fd, BPF_ANY);
            ++i;
        }
    }
        error(1, 0, "bpf_flow_load %s", cfg_path_name);

    ret = bpf_prog_attach(prog_fd, 0 /* Ignore */, BPF_FLOW_DISSECTOR, 0);
    if (ret)
@@ -69,7 +37,6 @@ static void load_and_attach_program(void)
    ret = bpf_object__pin(obj, cfg_pin_path);
    if (ret)
        error(1, 0, "bpf_object__pin %s", cfg_pin_path);

}

static void detach_program(void)

@@ -0,0 +1,55 @@
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
#ifndef FLOW_DISSECTOR_LOAD
#define FLOW_DISSECTOR_LOAD

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static inline int bpf_flow_load(struct bpf_object **obj,
                                const char *path,
                                const char *section_name,
                                const char *map_name,
                                int *prog_fd)
{
    struct bpf_program *prog, *main_prog;
    struct bpf_map *prog_array;
    int prog_array_fd;
    int ret, fd, i;

    ret = bpf_prog_load(path, BPF_PROG_TYPE_FLOW_DISSECTOR, obj,
                        prog_fd);
    if (ret)
        return ret;

    main_prog = bpf_object__find_program_by_title(*obj, section_name);
    if (!main_prog)
        return ret;

    *prog_fd = bpf_program__fd(main_prog);
    if (*prog_fd < 0)
        return ret;

    prog_array = bpf_object__find_map_by_name(*obj, map_name);
    if (!prog_array)
        return ret;

    prog_array_fd = bpf_map__fd(prog_array);
    if (prog_array_fd < 0)
        return ret;

    i = 0;
    bpf_object__for_each_program(prog, *obj) {
        fd = bpf_program__fd(prog);
        if (fd < 0)
            return fd;

        if (fd != *prog_fd) {
            bpf_map_update_elem(prog_array_fd, &i, &fd, BPF_ANY);
            ++i;
        }
    }

    return 0;
}

#endif /* FLOW_DISSECTOR_LOAD */

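Because the helper is a static inline in a header, both the standalone
flow_dissector_load tool and test_progs can share it. A minimal caller,
mirroring what test_progs does later in this commit (object path, section
and map names are the ones the selftests use):

#include <stdio.h>
#include "flow_dissector_load.h"

int main(void)
{
    struct bpf_object *obj;
    int err, prog_fd;

    /* Loads bpf_flow.o, finds the "flow_dissector" program and fills
     * the "jmp_table" prog array with the remaining sub-programs. */
    err = bpf_flow_load(&obj, "./bpf_flow.o", "flow_dissector",
                        "jmp_table", &prog_fd);
    if (err) {
        fprintf(stderr, "bpf_flow_load: %d\n", err);
        return 1;
    }

    /* prog_fd can now be attached, or fed to BPF_PROG_TEST_RUN */
    err = bpf_prog_attach(prog_fd, 0 /* ignored */, BPF_FLOW_DISSECTOR, 0);
    if (err)
        fprintf(stderr, "bpf_prog_attach: %d\n", err);

    bpf_object__close(obj);
    return err ? 1 : 0;
}
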
@@ -18,6 +18,7 @@
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <assert.h>
#include <bpf/libbpf.h>
#include <bpf/btf.h>

@@ -134,6 +135,12 @@ static struct btf_header hdr_tmpl = {
    .hdr_len = sizeof(struct btf_header),
};

/* several different mapv kinds(types) supported by pprint */
enum pprint_mapv_kind_t {
    PPRINT_MAPV_KIND_BASIC = 0,
    PPRINT_MAPV_KIND_INT128,
};

struct btf_raw_test {
    const char *descr;
    const char *str_sec;
@@ -156,6 +163,7 @@ struct btf_raw_test {
    int type_off_delta;
    int str_off_delta;
    int str_len_delta;
    enum pprint_mapv_kind_t mapv_kind;
};

#define BTF_STR_SEC(str) \
@@ -2707,6 +2715,99 @@ static struct btf_raw_test raw_tests[] = {
    .err_str = "Invalid member offset",
},

{
    .descr = "128-bit int",
    .raw_types = {
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),   /* [1] */
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 128, 16), /* [2] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0A"),
    .map_type = BPF_MAP_TYPE_ARRAY,
    .map_name = "int_type_check_btf",
    .key_size = sizeof(int),
    .value_size = sizeof(int),
    .key_type_id = 1,
    .value_type_id = 1,
    .max_entries = 4,
},

{
    .descr = "struct, 128-bit int member",
    .raw_types = {
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),   /* [1] */
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 128, 16), /* [2] */
        BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 16), /* [3] */
        BTF_MEMBER_ENC(NAME_TBD, 2, 0),
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0A"),
    .map_type = BPF_MAP_TYPE_ARRAY,
    .map_name = "struct_type_check_btf",
    .key_size = sizeof(int),
    .value_size = sizeof(int),
    .key_type_id = 1,
    .value_type_id = 1,
    .max_entries = 4,
},

{
    .descr = "struct, 120-bit int member bitfield",
    .raw_types = {
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),   /* [1] */
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 120, 16), /* [2] */
        BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 1), 16), /* [3] */
        BTF_MEMBER_ENC(NAME_TBD, 2, 0),
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0A"),
    .map_type = BPF_MAP_TYPE_ARRAY,
    .map_name = "struct_type_check_btf",
    .key_size = sizeof(int),
    .value_size = sizeof(int),
    .key_type_id = 1,
    .value_type_id = 1,
    .max_entries = 4,
},

{
    .descr = "struct, kind_flag, 128-bit int member",
    .raw_types = {
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),   /* [1] */
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 128, 16), /* [2] */
        BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_STRUCT, 1, 1), 16), /* [3] */
        BTF_MEMBER_ENC(NAME_TBD, 2, BTF_MEMBER_OFFSET(0, 0)),
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0A"),
    .map_type = BPF_MAP_TYPE_ARRAY,
    .map_name = "struct_type_check_btf",
    .key_size = sizeof(int),
    .value_size = sizeof(int),
    .key_type_id = 1,
    .value_type_id = 1,
    .max_entries = 4,
},

{
    .descr = "struct, kind_flag, 120-bit int member bitfield",
    .raw_types = {
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),   /* [1] */
        BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 128, 16), /* [2] */
        BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_STRUCT, 1, 1), 16), /* [3] */
        BTF_MEMBER_ENC(NAME_TBD, 2, BTF_MEMBER_OFFSET(120, 0)),
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0A"),
    .map_type = BPF_MAP_TYPE_ARRAY,
    .map_name = "struct_type_check_btf",
    .key_size = sizeof(int),
    .value_size = sizeof(int),
    .key_type_id = 1,
    .value_type_id = 1,
    .max_entries = 4,
},

}; /* struct btf_raw_test raw_tests[] */

static const char *get_next_str(const char *start, const char *end)
@@ -3530,6 +3631,16 @@ struct pprint_mapv {
    uint32_t bits2c:2;
};

#ifdef __SIZEOF_INT128__
struct pprint_mapv_int128 {
    __int128 si128a;
    __int128 si128b;
    unsigned __int128 bits3:3;
    unsigned __int128 bits80:80;
    unsigned __int128 ui128;
};
#endif

static struct btf_raw_test pprint_test_template[] = {
{
    .raw_types = {
@@ -3721,6 +3832,35 @@ static struct btf_raw_test pprint_test_template[] = {
    .max_entries = 128 * 1024,
},

#ifdef __SIZEOF_INT128__
{
    /* test int128 */
    .raw_types = {
        /* unsigned int */                              /* [1] */
        BTF_TYPE_INT_ENC(NAME_TBD, 0, 0, 32, 4),
        /* __int128 */                                  /* [2] */
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 128, 16),
        /* unsigned __int128 */                         /* [3] */
        BTF_TYPE_INT_ENC(NAME_TBD, 0, 0, 128, 16),
        /* struct pprint_mapv_int128 */                 /* [4] */
        BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 1, 5), 64),
        BTF_MEMBER_ENC(NAME_TBD, 2, BTF_MEMBER_OFFSET(0, 0)),    /* si128a */
        BTF_MEMBER_ENC(NAME_TBD, 2, BTF_MEMBER_OFFSET(0, 128)),  /* si128b */
        BTF_MEMBER_ENC(NAME_TBD, 3, BTF_MEMBER_OFFSET(3, 256)),  /* bits3 */
        BTF_MEMBER_ENC(NAME_TBD, 3, BTF_MEMBER_OFFSET(80, 259)), /* bits80 */
        BTF_MEMBER_ENC(NAME_TBD, 3, BTF_MEMBER_OFFSET(0, 384)),  /* ui128 */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0unsigned int\0__int128\0unsigned __int128\0pprint_mapv_int128\0si128a\0si128b\0bits3\0bits80\0ui128"),
    .key_size = sizeof(unsigned int),
    .value_size = sizeof(struct pprint_mapv_int128),
    .key_type_id = 1,
    .value_type_id = 4,
    .max_entries = 128 * 1024,
    .mapv_kind = PPRINT_MAPV_KIND_INT128,
},
#endif

};

static struct btf_pprint_test_meta {

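The BTF_MEMBER_OFFSET(bitfield_size, bit_offset) values in the raw test
above encode the natural C layout of struct pprint_mapv_int128. A small
compile-time check of that correspondence, assuming an ABI such as x86-64
where __int128 is 16 bytes with 16-byte alignment (the struct name here is
a local mirror for illustration, not part of the test):

#include <stddef.h>

struct pprint_mapv_int128_chk {
    __int128 si128a;              /* BTF_MEMBER_OFFSET(0, 0)    */
    __int128 si128b;              /* BTF_MEMBER_OFFSET(0, 128)  */
    unsigned __int128 bits3:3;    /* BTF_MEMBER_OFFSET(3, 256)  */
    unsigned __int128 bits80:80;  /* BTF_MEMBER_OFFSET(80, 259) */
    unsigned __int128 ui128;      /* BTF_MEMBER_OFFSET(0, 384)  */
};

/* Two 16-byte ints, one 16-byte unit shared by bits3/bits80, then
 * ui128 at byte 48: sizeof matches the 64 in BTF_TYPE_ENC above. */
_Static_assert(sizeof(struct pprint_mapv_int128_chk) == 64, "BTF size");
_Static_assert(offsetof(struct pprint_mapv_int128_chk, ui128) == 48,
               "bit offset 384 == byte offset 48");
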
@@ -3787,24 +3927,108 @@ static struct btf_pprint_test_meta {

};

static size_t get_pprint_mapv_size(enum pprint_mapv_kind_t mapv_kind)
{
    if (mapv_kind == PPRINT_MAPV_KIND_BASIC)
        return sizeof(struct pprint_mapv);

static void set_pprint_mapv(struct pprint_mapv *v, uint32_t i,
#ifdef __SIZEOF_INT128__
    if (mapv_kind == PPRINT_MAPV_KIND_INT128)
        return sizeof(struct pprint_mapv_int128);
#endif

    assert(0);
}

static void set_pprint_mapv(enum pprint_mapv_kind_t mapv_kind,
                            void *mapv, uint32_t i,
                            int num_cpus, int rounded_value_size)
{
    int cpu;

    for (cpu = 0; cpu < num_cpus; cpu++) {
        v->ui32 = i + cpu;
        v->si32 = -i;
        v->unused_bits2a = 3;
        v->bits28 = i;
        v->unused_bits2b = 3;
        v->ui64 = i;
        v->aenum = i & 0x03;
        v->ui32b = 4;
        v->bits2c = 1;
        v = (void *)v + rounded_value_size;
    if (mapv_kind == PPRINT_MAPV_KIND_BASIC) {
        struct pprint_mapv *v = mapv;

        for (cpu = 0; cpu < num_cpus; cpu++) {
            v->ui32 = i + cpu;
            v->si32 = -i;
            v->unused_bits2a = 3;
            v->bits28 = i;
            v->unused_bits2b = 3;
            v->ui64 = i;
            v->aenum = i & 0x03;
            v->ui32b = 4;
            v->bits2c = 1;
            v = (void *)v + rounded_value_size;
        }
    }

#ifdef __SIZEOF_INT128__
    if (mapv_kind == PPRINT_MAPV_KIND_INT128) {
        struct pprint_mapv_int128 *v = mapv;

        for (cpu = 0; cpu < num_cpus; cpu++) {
            v->si128a = i;
            v->si128b = -i;
            v->bits3 = i & 0x07;
            v->bits80 = (((unsigned __int128)1) << 64) + i;
            v->ui128 = (((unsigned __int128)2) << 64) + i;
            v = (void *)v + rounded_value_size;
        }
    }
#endif
}

ssize_t get_pprint_expected_line(enum pprint_mapv_kind_t mapv_kind,
                                 char *expected_line, ssize_t line_size,
                                 bool percpu_map, unsigned int next_key,
                                 int cpu, void *mapv)
{
    ssize_t nexpected_line = -1;

    if (mapv_kind == PPRINT_MAPV_KIND_BASIC) {
        struct pprint_mapv *v = mapv;

        nexpected_line = snprintf(expected_line, line_size,
                                  "%s%u: {%u,0,%d,0x%x,0x%x,0x%x,"
                                  "{%lu|[%u,%u,%u,%u,%u,%u,%u,%u]},%s,"
                                  "%u,0x%x}\n",
                                  percpu_map ? "\tcpu" : "",
                                  percpu_map ? cpu : next_key,
                                  v->ui32, v->si32,
                                  v->unused_bits2a,
                                  v->bits28,
                                  v->unused_bits2b,
                                  v->ui64,
                                  v->ui8a[0], v->ui8a[1],
                                  v->ui8a[2], v->ui8a[3],
                                  v->ui8a[4], v->ui8a[5],
                                  v->ui8a[6], v->ui8a[7],
                                  pprint_enum_str[v->aenum],
                                  v->ui32b,
                                  v->bits2c);
    }

#ifdef __SIZEOF_INT128__
    if (mapv_kind == PPRINT_MAPV_KIND_INT128) {
        struct pprint_mapv_int128 *v = mapv;

        nexpected_line = snprintf(expected_line, line_size,
                                  "%s%u: {0x%lx,0x%lx,0x%lx,"
                                  "0x%lx%016lx,0x%lx%016lx}\n",
                                  percpu_map ? "\tcpu" : "",
                                  percpu_map ? cpu : next_key,
                                  (uint64_t)v->si128a,
                                  (uint64_t)v->si128b,
                                  (uint64_t)v->bits3,
                                  (uint64_t)(v->bits80 >> 64),
                                  (uint64_t)v->bits80,
                                  (uint64_t)(v->ui128 >> 64),
                                  (uint64_t)v->ui128);
    }
#endif

    return nexpected_line;
}

static int check_line(const char *expected_line, int nexpected_line,
@@ -3828,10 +4052,10 @@ static int check_line(const char *expected_line, int nexpected_line,
static int do_test_pprint(int test_num)
{
    const struct btf_raw_test *test = &pprint_test_template[test_num];
    enum pprint_mapv_kind_t mapv_kind = test->mapv_kind;
    struct bpf_create_map_attr create_attr = {};
    bool ordered_map, lossless_map, percpu_map;
    int err, ret, num_cpus, rounded_value_size;
    struct pprint_mapv *mapv = NULL;
    unsigned int key, nr_read_elems;
    int map_fd = -1, btf_fd = -1;
    unsigned int raw_btf_size;
@@ -3840,6 +4064,7 @@ static int do_test_pprint(int test_num)
    char pin_path[255];
    size_t line_len = 0;
    char *line = NULL;
    void *mapv = NULL;
    uint8_t *raw_btf;
    ssize_t nread;

@@ -3892,7 +4117,7 @@ static int do_test_pprint(int test_num)

    percpu_map = test->percpu_map;
    num_cpus = percpu_map ? bpf_num_possible_cpus() : 1;
    rounded_value_size = round_up(sizeof(struct pprint_mapv), 8);
    rounded_value_size = round_up(get_pprint_mapv_size(mapv_kind), 8);
    mapv = calloc(num_cpus, rounded_value_size);
    if (CHECK(!mapv, "mapv allocation failure")) {
        err = -1;
@@ -3900,7 +4125,7 @@ static int do_test_pprint(int test_num)
    }

    for (key = 0; key < test->max_entries; key++) {
        set_pprint_mapv(mapv, key, num_cpus, rounded_value_size);
        set_pprint_mapv(mapv_kind, mapv, key, num_cpus, rounded_value_size);
        bpf_map_update_elem(map_fd, &key, mapv, 0);
    }

@@ -3924,13 +4149,13 @@ static int do_test_pprint(int test_num)
    ordered_map = test->ordered_map;
    lossless_map = test->lossless_map;
    do {
        struct pprint_mapv *cmapv;
        ssize_t nexpected_line;
        unsigned int next_key;
        void *cmapv;
        int cpu;

        next_key = ordered_map ? nr_read_elems : atoi(line);
        set_pprint_mapv(mapv, next_key, num_cpus, rounded_value_size);
        set_pprint_mapv(mapv_kind, mapv, next_key, num_cpus, rounded_value_size);
        cmapv = mapv;

        for (cpu = 0; cpu < num_cpus; cpu++) {
@@ -3963,31 +4188,16 @@ static int do_test_pprint(int test_num)
                break;
            }

            nexpected_line = snprintf(expected_line, sizeof(expected_line),
                                      "%s%u: {%u,0,%d,0x%x,0x%x,0x%x,"
                                      "{%lu|[%u,%u,%u,%u,%u,%u,%u,%u]},%s,"
                                      "%u,0x%x}\n",
                                      percpu_map ? "\tcpu" : "",
                                      percpu_map ? cpu : next_key,
                                      cmapv->ui32, cmapv->si32,
                                      cmapv->unused_bits2a,
                                      cmapv->bits28,
                                      cmapv->unused_bits2b,
                                      cmapv->ui64,
                                      cmapv->ui8a[0], cmapv->ui8a[1],
                                      cmapv->ui8a[2], cmapv->ui8a[3],
                                      cmapv->ui8a[4], cmapv->ui8a[5],
                                      cmapv->ui8a[6], cmapv->ui8a[7],
                                      pprint_enum_str[cmapv->aenum],
                                      cmapv->ui32b,
                                      cmapv->bits2c);

            nexpected_line = get_pprint_expected_line(mapv_kind, expected_line,
                                                      sizeof(expected_line),
                                                      percpu_map, next_key,
                                                      cpu, cmapv);
            err = check_line(expected_line, nexpected_line,
                             sizeof(expected_line), line);
            if (err == -1)
                goto done;

            cmapv = (void *)cmapv + rounded_value_size;
            cmapv = cmapv + rounded_value_size;
        }

        if (percpu_map) {
@@ -4083,6 +4293,10 @@ static struct prog_info_raw_test {
    __u32 line_info_rec_size;
    __u32 nr_jited_ksyms;
    bool expected_prog_load_failure;
    __u32 dead_code_cnt;
    __u32 dead_code_mask;
    __u32 dead_func_cnt;
    __u32 dead_func_mask;
} info_raw_tests[] = {
{
    .descr = "func_type (main func + one sub)",
@@ -4509,6 +4723,369 @@ static struct prog_info_raw_test {
    .expected_prog_load_failure = true,
},

{
    .descr = "line_info (dead start)",
    .raw_types = {
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0int\0/* dead jmp */\0int a=1;\0int b=2;\0return a + b;\0return a + b;"),
    .insns = {
        BPF_JMP_IMM(BPF_JA, 0, 0, 0),
        BPF_MOV64_IMM(BPF_REG_0, 1),
        BPF_MOV64_IMM(BPF_REG_1, 2),
        BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
        BPF_EXIT_INSN(),
    },
    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .func_info_cnt = 0,
    .line_info = {
        BPF_LINE_INFO_ENC(0, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(1, 0, NAME_TBD, 2, 9),
        BPF_LINE_INFO_ENC(2, 0, NAME_TBD, 3, 8),
        BPF_LINE_INFO_ENC(3, 0, NAME_TBD, 4, 7),
        BPF_LINE_INFO_ENC(4, 0, NAME_TBD, 5, 6),
        BTF_END_RAW,
    },
    .line_info_rec_size = sizeof(struct bpf_line_info),
    .nr_jited_ksyms = 1,
    .dead_code_cnt = 1,
    .dead_code_mask = 0x01,
},

{
    .descr = "line_info (dead end)",
    .raw_types = {
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0int\0int a=1;\0int b=2;\0return a + b;\0/* dead jmp */\0return a + b;\0/* dead exit */"),
    .insns = {
        BPF_MOV64_IMM(BPF_REG_0, 1),
        BPF_MOV64_IMM(BPF_REG_1, 2),
        BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
        BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 1),
        BPF_EXIT_INSN(),
        BPF_EXIT_INSN(),
    },
    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .func_info_cnt = 0,
    .line_info = {
        BPF_LINE_INFO_ENC(0, 0, NAME_TBD, 1, 12),
        BPF_LINE_INFO_ENC(1, 0, NAME_TBD, 2, 11),
        BPF_LINE_INFO_ENC(2, 0, NAME_TBD, 3, 10),
        BPF_LINE_INFO_ENC(3, 0, NAME_TBD, 4, 9),
        BPF_LINE_INFO_ENC(4, 0, NAME_TBD, 5, 8),
        BPF_LINE_INFO_ENC(5, 0, NAME_TBD, 6, 7),
        BTF_END_RAW,
    },
    .line_info_rec_size = sizeof(struct bpf_line_info),
    .nr_jited_ksyms = 1,
    .dead_code_cnt = 2,
    .dead_code_mask = 0x28,
},

{
    .descr = "line_info (dead code + subprog + func_info)",
    .raw_types = {
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
        BTF_FUNC_PROTO_ENC(1, 1),                             /* [2] */
        BTF_FUNC_PROTO_ARG_ENC(NAME_TBD, 1),
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [3] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [4] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0int\0x\0sub\0main\0int a=1+1;\0/* dead jmp */"
                "\0/* dead */\0/* dead */\0/* dead */\0/* dead */"
                "\0/* dead */\0/* dead */\0/* dead */\0/* dead */"
                "\0return func(a);\0b+=1;\0return b;"),
    .insns = {
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
        BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
        BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 8),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_CALL_REL(1),
        BPF_EXIT_INSN(),
        BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
        BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
        BPF_EXIT_INSN(),
    },
    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .func_info_cnt = 2,
    .func_info_rec_size = 8,
    .func_info = { {0, 4}, {14, 3} },
    .line_info = {
        BPF_LINE_INFO_ENC(0, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(3, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(4, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(5, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(6, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(7, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(8, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(9, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(10, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(11, 0, NAME_TBD, 2, 9),
        BPF_LINE_INFO_ENC(12, 0, NAME_TBD, 2, 9),
        BPF_LINE_INFO_ENC(14, 0, NAME_TBD, 3, 8),
        BPF_LINE_INFO_ENC(16, 0, NAME_TBD, 4, 7),
        BTF_END_RAW,
    },
    .line_info_rec_size = sizeof(struct bpf_line_info),
    .nr_jited_ksyms = 2,
    .dead_code_cnt = 9,
    .dead_code_mask = 0x3fe,
},

{
    .descr = "line_info (dead subprog)",
    .raw_types = {
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
        BTF_FUNC_PROTO_ENC(1, 1),                             /* [2] */
        BTF_FUNC_PROTO_ARG_ENC(NAME_TBD, 1),
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [3] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [4] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [5] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0int\0x\0dead\0main\0func\0int a=1+1;\0/* live call */"
                "\0return 0;\0return 0;\0/* dead */\0/* dead */"
                "\0/* dead */\0return bla + 1;\0return bla + 1;"
                "\0return bla + 1;\0return func(a);\0b+=1;\0return b;"),
    .insns = {
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
        BPF_CALL_REL(3),
        BPF_CALL_REL(5),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_EXIT_INSN(),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_CALL_REL(1),
        BPF_EXIT_INSN(),
        BPF_MOV64_REG(BPF_REG_0, 2),
        BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
        BPF_EXIT_INSN(),
    },
    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .func_info_cnt = 3,
    .func_info_rec_size = 8,
    .func_info = { {0, 4}, {6, 3}, {9, 5} },
    .line_info = {
        BPF_LINE_INFO_ENC(0, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(3, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(4, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(5, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(6, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(7, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(8, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(9, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(10, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(11, 0, NAME_TBD, 2, 9),
        BTF_END_RAW,
    },
    .line_info_rec_size = sizeof(struct bpf_line_info),
    .nr_jited_ksyms = 2,
    .dead_code_cnt = 3,
    .dead_code_mask = 0x70,
    .dead_func_cnt = 1,
    .dead_func_mask = 0x2,
},

{
    .descr = "line_info (dead last subprog)",
    .raw_types = {
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
        BTF_FUNC_PROTO_ENC(1, 1),                             /* [2] */
        BTF_FUNC_PROTO_ARG_ENC(NAME_TBD, 1),
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [3] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [5] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0int\0x\0dead\0main\0int a=1+1;\0/* live call */"
                "\0return 0;\0/* dead */\0/* dead */"),
    .insns = {
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
        BPF_CALL_REL(2),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_EXIT_INSN(),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_EXIT_INSN(),
    },
    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .func_info_cnt = 2,
    .func_info_rec_size = 8,
    .func_info = { {0, 4}, {5, 3} },
    .line_info = {
        BPF_LINE_INFO_ENC(0, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(3, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(4, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(5, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(6, 0, NAME_TBD, 1, 10),
        BTF_END_RAW,
    },
    .line_info_rec_size = sizeof(struct bpf_line_info),
    .nr_jited_ksyms = 1,
    .dead_code_cnt = 2,
    .dead_code_mask = 0x18,
    .dead_func_cnt = 1,
    .dead_func_mask = 0x2,
},

{
    .descr = "line_info (dead subprog + dead start)",
    .raw_types = {
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
        BTF_FUNC_PROTO_ENC(1, 1),                             /* [2] */
        BTF_FUNC_PROTO_ARG_ENC(NAME_TBD, 1),
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [3] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [4] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [5] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0int\0x\0dead\0main\0func\0int a=1+1;\0/* dead */"
                "\0return 0;\0return 0;\0return 0;"
                "\0/* dead */\0/* dead */\0/* dead */\0/* dead */"
                "\0return b + 1;\0return b + 1;\0return b + 1;"),
    .insns = {
        BPF_JMP_IMM(BPF_JA, 0, 0, 0),
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
        BPF_CALL_REL(3),
        BPF_CALL_REL(5),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_EXIT_INSN(),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_CALL_REL(1),
        BPF_EXIT_INSN(),
        BPF_JMP_IMM(BPF_JA, 0, 0, 0),
        BPF_MOV64_REG(BPF_REG_0, 2),
        BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
        BPF_EXIT_INSN(),
    },
    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .func_info_cnt = 3,
    .func_info_rec_size = 8,
    .func_info = { {0, 4}, {7, 3}, {10, 5} },
    .line_info = {
        BPF_LINE_INFO_ENC(0, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(3, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(4, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(5, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(6, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(7, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(8, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(9, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(10, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(11, 0, NAME_TBD, 2, 9),
        BPF_LINE_INFO_ENC(12, 0, NAME_TBD, 2, 9),
        BPF_LINE_INFO_ENC(13, 0, NAME_TBD, 2, 9),
        BTF_END_RAW,
    },
    .line_info_rec_size = sizeof(struct bpf_line_info),
    .nr_jited_ksyms = 2,
    .dead_code_cnt = 5,
    .dead_code_mask = 0x1e2,
    .dead_func_cnt = 1,
    .dead_func_mask = 0x2,
},

{
    .descr = "line_info (dead subprog + dead start w/ move)",
    .raw_types = {
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
        BTF_FUNC_PROTO_ENC(1, 1),                             /* [2] */
        BTF_FUNC_PROTO_ARG_ENC(NAME_TBD, 1),
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [3] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [4] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [5] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0int\0x\0dead\0main\0func\0int a=1+1;\0/* live call */"
                "\0return 0;\0return 0;\0/* dead */\0/* dead */"
                "\0/* dead */\0return bla + 1;\0return bla + 1;"
                "\0return bla + 1;\0return func(a);\0b+=1;\0return b;"),
    .insns = {
        BPF_MOV64_IMM(BPF_REG_2, 1),
        BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
        BPF_CALL_REL(3),
        BPF_CALL_REL(5),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_EXIT_INSN(),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_CALL_REL(1),
        BPF_EXIT_INSN(),
        BPF_JMP_IMM(BPF_JA, 0, 0, 0),
        BPF_MOV64_REG(BPF_REG_0, 2),
        BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
        BPF_EXIT_INSN(),
    },
    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .func_info_cnt = 3,
    .func_info_rec_size = 8,
    .func_info = { {0, 4}, {6, 3}, {9, 5} },
    .line_info = {
        BPF_LINE_INFO_ENC(0, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(3, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(4, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(5, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(6, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(7, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(8, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(9, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(11, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(12, 0, NAME_TBD, 2, 9),
        BTF_END_RAW,
    },
    .line_info_rec_size = sizeof(struct bpf_line_info),
    .nr_jited_ksyms = 2,
    .dead_code_cnt = 3,
    .dead_code_mask = 0x70,
    .dead_func_cnt = 1,
    .dead_func_mask = 0x2,
},

{
    .descr = "line_info (dead end + subprog start w/ no linfo)",
    .raw_types = {
        BTF_TYPE_INT_ENC(NAME_TBD, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
        BTF_FUNC_PROTO_ENC(1, 1),                             /* [2] */
        BTF_FUNC_PROTO_ARG_ENC(NAME_TBD, 1),
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [3] */
        BTF_FUNC_ENC(NAME_TBD, 2),                            /* [4] */
        BTF_END_RAW,
    },
    BTF_STR_SEC("\0int\0x\0main\0func\0/* main linfo */\0/* func linfo */"),
    .insns = {
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 1, 3),
        BPF_CALL_REL(3),
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_EXIT_INSN(),
        BPF_EXIT_INSN(),
        BPF_JMP_IMM(BPF_JA, 0, 0, 0),
        BPF_EXIT_INSN(),
    },
    .prog_type = BPF_PROG_TYPE_TRACEPOINT,
    .func_info_cnt = 2,
    .func_info_rec_size = 8,
    .func_info = { {0, 3}, {6, 4}, },
    .line_info = {
        BPF_LINE_INFO_ENC(0, 0, NAME_TBD, 1, 10),
        BPF_LINE_INFO_ENC(6, 0, NAME_TBD, 1, 10),
        BTF_END_RAW,
    },
    .line_info_rec_size = sizeof(struct bpf_line_info),
    .nr_jited_ksyms = 2,
},

};

static size_t probe_prog_length(const struct bpf_insn *fp)
@@ -4568,6 +5145,7 @@ static int test_get_finfo(const struct prog_info_raw_test *test,
    struct bpf_func_info *finfo;
    __u32 info_len, rec_size, i;
    void *func_info = NULL;
    __u32 nr_func_info;
    int err;

    /* get necessary lens */
@@ -4577,7 +5155,8 @@ static int test_get_finfo(const struct prog_info_raw_test *test,
        fprintf(stderr, "%s\n", btf_log_buf);
        return -1;
    }
    if (CHECK(info.nr_func_info != test->func_info_cnt,
    nr_func_info = test->func_info_cnt - test->dead_func_cnt;
    if (CHECK(info.nr_func_info != nr_func_info,
              "incorrect info.nr_func_info (1st) %d",
              info.nr_func_info)) {
        return -1;
@@ -4598,7 +5177,7 @@ static int test_get_finfo(const struct prog_info_raw_test *test,

    /* reset info to only retrieve func_info related data */
    memset(&info, 0, sizeof(info));
    info.nr_func_info = test->func_info_cnt;
    info.nr_func_info = nr_func_info;
    info.func_info_rec_size = rec_size;
    info.func_info = ptr_to_u64(func_info);
    err = bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
@@ -4607,7 +5186,7 @@ static int test_get_finfo(const struct prog_info_raw_test *test,
        err = -1;
        goto done;
    }
    if (CHECK(info.nr_func_info != test->func_info_cnt,
    if (CHECK(info.nr_func_info != nr_func_info,
              "incorrect info.nr_func_info (2nd) %d",
              info.nr_func_info)) {
        err = -1;
@@ -4621,7 +5200,9 @@ static int test_get_finfo(const struct prog_info_raw_test *test,
    }

    finfo = func_info;
    for (i = 0; i < test->func_info_cnt; i++) {
    for (i = 0; i < nr_func_info; i++) {
        if (test->dead_func_mask & (1 << i))
            continue;
        if (CHECK(finfo->type_id != test->func_info[i][1],
                  "incorrect func_type %u expected %u",
                  finfo->type_id, test->func_info[i][1])) {
@@ -4650,6 +5231,7 @@ static int test_get_linfo(const struct prog_info_raw_test *test,
    struct bpf_prog_info info = {};
    __u32 *jited_func_lens = NULL;
    __u64 cur_func_ksyms;
    __u32 dead_insns;
    int err;

    jited_cnt = cnt;
@@ -4658,7 +5240,7 @@ static int test_get_linfo(const struct prog_info_raw_test *test,
    if (test->nr_jited_ksyms)
        nr_jited_ksyms = test->nr_jited_ksyms;
    else
        nr_jited_ksyms = test->func_info_cnt;
        nr_jited_ksyms = test->func_info_cnt - test->dead_func_cnt;
    nr_jited_func_lens = nr_jited_ksyms;

    info_len = sizeof(struct bpf_prog_info);
@@ -4760,12 +5342,20 @@ static int test_get_linfo(const struct prog_info_raw_test *test,
        goto done;
    }

    dead_insns = 0;
    while (test->dead_code_mask & (1 << dead_insns))
        dead_insns++;

    CHECK(linfo[0].insn_off, "linfo[0].insn_off:%u",
          linfo[0].insn_off);
    for (i = 1; i < cnt; i++) {
        const struct bpf_line_info *expected_linfo;

        expected_linfo = patched_linfo + (i * test->line_info_rec_size);
        while (test->dead_code_mask & (1 << (i + dead_insns)))
            dead_insns++;

        expected_linfo = patched_linfo +
            ((i + dead_insns) * test->line_info_rec_size);
        if (CHECK(linfo[i].insn_off <= linfo[i - 1].insn_off,
                  "linfo[%u].insn_off:%u <= linfo[%u].insn_off:%u",
                  i, linfo[i].insn_off,
@@ -4923,7 +5513,9 @@ static int do_test_info_raw(unsigned int test_num)
    if (err)
        goto done;

    err = test_get_linfo(test, patched_linfo, attr.line_info_cnt, prog_fd);
    err = test_get_linfo(test, patched_linfo,
                         attr.line_info_cnt - test->dead_code_cnt,
                         prog_fd);
    if (err)
        goto done;

@@ -16,7 +16,6 @@
#include <errno.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <linux/ipv6.h>
#include <netinet/ip.h>
#include <netinet/in.h>
@@ -25,7 +24,6 @@
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

@@ -633,7 +633,6 @@ static void test_stackmap(int task, void *data)
    close(fd);
}

#include <sys/socket.h>
#include <sys/ioctl.h>
#include <arpa/inet.h>
#include <sys/select.h>

@@ -39,6 +39,7 @@ typedef __u16 __sum16;
#include "bpf_endian.h"
#include "bpf_rlimit.h"
#include "trace_helpers.h"
#include "flow_dissector_load.h"

static int error_cnt, pass_cnt;
static bool jit_enabled;
@@ -53,9 +54,10 @@ static struct {
} __packed pkt_v4 = {
    .eth.h_proto = __bpf_constant_htons(ETH_P_IP),
    .iph.ihl = 5,
    .iph.protocol = 6,
    .iph.protocol = IPPROTO_TCP,
    .iph.tot_len = __bpf_constant_htons(MAGIC_BYTES),
    .tcp.urg_ptr = 123,
    .tcp.doff = 5,
};

/* ipv6 test vector */
@@ -65,9 +67,10 @@ static struct {
    struct tcphdr tcp;
} __packed pkt_v6 = {
    .eth.h_proto = __bpf_constant_htons(ETH_P_IPV6),
    .iph.nexthdr = 6,
    .iph.nexthdr = IPPROTO_TCP,
    .iph.payload_len = __bpf_constant_htons(MAGIC_BYTES),
    .tcp.urg_ptr = 123,
    .tcp.doff = 5,
};

#define _CHECK(condition, tag, duration, format...) ({ \
@@ -1912,6 +1915,76 @@ out:
    bpf_object__close(obj);
}

#define CHECK_FLOW_KEYS(desc, got, expected)                \
    CHECK(memcmp(&got, &expected, sizeof(got)) != 0,        \
          desc,                                             \
          "nhoff=%u/%u "                                    \
          "thoff=%u/%u "                                    \
          "addr_proto=0x%x/0x%x "                           \
          "is_frag=%u/%u "                                  \
          "is_first_frag=%u/%u "                            \
          "is_encap=%u/%u "                                 \
          "n_proto=0x%x/0x%x "                              \
          "sport=%u/%u "                                    \
          "dport=%u/%u\n",                                  \
          got.nhoff, expected.nhoff,                        \
          got.thoff, expected.thoff,                        \
          got.addr_proto, expected.addr_proto,              \
          got.is_frag, expected.is_frag,                    \
          got.is_first_frag, expected.is_first_frag,        \
          got.is_encap, expected.is_encap,                  \
          got.n_proto, expected.n_proto,                    \
          got.sport, expected.sport,                        \
          got.dport, expected.dport)

static struct bpf_flow_keys pkt_v4_flow_keys = {
    .nhoff = 0,
    .thoff = sizeof(struct iphdr),
    .addr_proto = ETH_P_IP,
    .ip_proto = IPPROTO_TCP,
    .n_proto = bpf_htons(ETH_P_IP),
};

static struct bpf_flow_keys pkt_v6_flow_keys = {
    .nhoff = 0,
    .thoff = sizeof(struct ipv6hdr),
    .addr_proto = ETH_P_IPV6,
    .ip_proto = IPPROTO_TCP,
    .n_proto = bpf_htons(ETH_P_IPV6),
};

static void test_flow_dissector(void)
{
    struct bpf_flow_keys flow_keys;
    struct bpf_object *obj;
    __u32 duration, retval;
    int err, prog_fd;
    __u32 size;

    err = bpf_flow_load(&obj, "./bpf_flow.o", "flow_dissector",
                        "jmp_table", &prog_fd);
    if (err) {
        error_cnt++;
        return;
    }

    err = bpf_prog_test_run(prog_fd, 10, &pkt_v4, sizeof(pkt_v4),
                            &flow_keys, &size, &retval, &duration);
    CHECK(size != sizeof(flow_keys) || err || retval != 1, "ipv4",
          "err %d errno %d retval %d duration %d size %u/%lu\n",
          err, errno, retval, duration, size, sizeof(flow_keys));
    CHECK_FLOW_KEYS("ipv4_flow_keys", flow_keys, pkt_v4_flow_keys);

    err = bpf_prog_test_run(prog_fd, 10, &pkt_v6, sizeof(pkt_v6),
                            &flow_keys, &size, &retval, &duration);
    CHECK(size != sizeof(flow_keys) || err || retval != 1, "ipv6",
          "err %d errno %d retval %d duration %d size %u/%lu\n",
          err, errno, retval, duration, size, sizeof(flow_keys));
    CHECK_FLOW_KEYS("ipv6_flow_keys", flow_keys, pkt_v6_flow_keys);

    bpf_object__close(obj);
}

int main(void)
{
    srand(time(NULL));
@@ -1939,6 +2012,7 @@ int main(void)
    test_reference_tracking();
    test_queue_stack_map(QUEUE);
    test_queue_stack_map(STACK);
    test_flow_dissector();

    printf("Summary: %d PASSED, %d FAILED\n", pass_cnt, error_cnt);
    return error_cnt ? EXIT_FAILURE : EXIT_SUCCESS;

@@ -158,10 +158,8 @@ static int run_test(int cgfd)
    bpf_object__for_each_program(prog, pobj) {
        prog_name = bpf_program__title(prog, /*needs_copy*/ false);

        if (libbpf_attach_type_by_name(prog_name, &attach_type)) {
            log_err("Unexpected prog: %s", prog_name);
        if (libbpf_attach_type_by_name(prog_name, &attach_type))
            goto err;
        }

        err = bpf_prog_attach(bpf_program__fd(prog), cgfd, attach_type,
                              BPF_F_ALLOW_OVERRIDE);

@@ -10,7 +10,6 @@
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <stdbool.h>
#include <signal.h>
#include <fcntl.h>

@@ -148,17 +148,17 @@ int main(int argc, char **argv)
    pthread_create(&tid, NULL, poller_thread, (void *)&pmu_fd);

    sprintf(test_script,
            "/usr/sbin/iptables -A INPUT -p tcp --dport %d -j DROP",
            "iptables -A INPUT -p tcp --dport %d -j DROP",
            TESTPORT);
    system(test_script);

    sprintf(test_script,
            "/usr/bin/nc 127.0.0.1 %d < /etc/passwd > /dev/null 2>&1 ",
            "nc 127.0.0.1 %d < /etc/passwd > /dev/null 2>&1 ",
            TESTPORT);
    system(test_script);

    sprintf(test_script,
            "/usr/sbin/iptables -D INPUT -p tcp --dport %d -j DROP",
            "iptables -D INPUT -p tcp --dport %d -j DROP",
            TESTPORT);
    system(test_script);

(File diff suppressed because it is too large.)

@@ -0,0 +1 @@
tests.h

@ -0,0 +1,50 @@
|
|||
{
|
||||
"invalid and of negative number",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
BPF_ALU64_IMM(BPF_AND, BPF_REG_1, -4),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr = "R0 max value is outside of the array range",
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"invalid range check",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 12),
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
BPF_MOV64_IMM(BPF_REG_9, 1),
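/* the verifier treats the BPF_MOD result as unknown, so the range computed below stays unbounded */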
BPF_ALU32_IMM(BPF_MOD, BPF_REG_1, 2),
BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 1),
BPF_ALU32_REG(BPF_AND, BPF_REG_9, BPF_REG_1),
BPF_ALU32_IMM(BPF_ADD, BPF_REG_9, 1),
BPF_ALU32_IMM(BPF_RSH, BPF_REG_9, 1),
BPF_MOV32_IMM(BPF_REG_3, 1),
BPF_ALU32_REG(BPF_SUB, BPF_REG_3, BPF_REG_9),
BPF_ALU32_IMM(BPF_MUL, BPF_REG_3, 0x10000000),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_3),
BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_3, 0),
BPF_MOV64_REG(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr = "R0 max value is outside of the array range",
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},

@@ -0,0 +1,219 @@
{
"valid map access into an array with a constant",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "R0 leaks addr",
.result_unpriv = REJECT,
.result = ACCEPT,
},
{
"valid map access into an array with a register",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
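/* r1 = 4: a known constant index, scaled below to byte offset 16 */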
BPF_MOV64_IMM(BPF_REG_1, 4),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "R0 leaks addr",
.result_unpriv = REJECT,
.result = ACCEPT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"valid map access into an array with a variable",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
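/* r1 is an unknown u32 from the map; the JGE below caps it below MAX_ENTRIES */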
BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES, 3),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "R0 leaks addr",
.result_unpriv = REJECT,
.result = ACCEPT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"valid map access into an array with a signed variable",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 0xffffffff, 1),
BPF_MOV32_IMM(BPF_REG_1, 0),
BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
BPF_MOV32_IMM(BPF_REG_1, 0),
BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "R0 leaks addr",
.result_unpriv = REJECT,
.result = ACCEPT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"invalid map access into an array with a constant",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, (MAX_ENTRIES + 1) << 2,
offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr = "invalid access to map value, value_size=48 off=48 size=8",
.result = REJECT,
},
{
"invalid map access into an array with a register",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
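/* constant index one element past the end of foo[] */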
BPF_MOV64_IMM(BPF_REG_1, MAX_ENTRIES + 1),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr = "R0 min value is outside of the array range",
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"invalid map access into an array with a variable",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
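/* r1 is never bounds checked before being used as an index */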
BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr = "R0 unbounded memory access, make sure to bounds check any array access into a map",
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"invalid map access into an array with no floor check",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
BPF_MOV32_IMM(BPF_REG_1, 0),
BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "R0 leaks addr",
.errstr = "R0 unbounded memory access",
.result_unpriv = REJECT,
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"invalid map access into an array with a invalid max check",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES + 1),
BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
BPF_MOV32_IMM(BPF_REG_1, 0),
BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3 },
.errstr_unpriv = "R0 leaks addr",
.errstr = "invalid access to map value, value_size=48 off=44 size=8",
.result_unpriv = REJECT,
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"invalid map access into an array with a invalid max check",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0,
offsetof(struct test_val, foo)),
BPF_EXIT_INSN(),
},
.fixup_map_hash_48b = { 3, 11 },
.errstr = "R0 pointer += pointer",
.result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},

@@ -0,0 +1,23 @@
{
"empty prog",
.insns = {
},
.errstr = "unknown opcode 00",
.result = REJECT,
},
{
"only exit insn",
.insns = {
BPF_EXIT_INSN(),
},
.errstr = "R0 !read_ok",
.result = REJECT,
},
{
"no bpf_exit",
.insns = {
BPF_ALU64_REG(BPF_MOV, BPF_REG_0, BPF_REG_2),
},
.errstr = "not an exit",
.result = REJECT,
},

@@ -0,0 +1,50 @@
{
"invalid call insn1",
.insns = {
BPF_RAW_INSN(BPF_JMP | BPF_CALL | BPF_X, 0, 0, 0, 0),
BPF_EXIT_INSN(),
},
.errstr = "unknown opcode 8d",
.result = REJECT,
},
{
"invalid call insn2",
.insns = {
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 1, 0),
BPF_EXIT_INSN(),
},
.errstr = "BPF_CALL uses reserved",
.result = REJECT,
},
{
"invalid function call",
.insns = {
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 1234567),
BPF_EXIT_INSN(),
},
.errstr = "invalid func unknown#1234567",
.result = REJECT,
},
{
"invalid argument register",
.insns = {
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
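/* the call above clobbers the caller-saved r1-r5, so r1 is unreadable here */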
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
BPF_EXIT_INSN(),
},
.errstr = "R1 !read_ok",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
"non-invalid argument register",
.insns = {
BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
BPF_ALU64_REG(BPF_MOV, BPF_REG_1, BPF_REG_6),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},

@@ -0,0 +1,134 @@
{
"add+sub+mul",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, 1),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 2),
BPF_MOV64_IMM(BPF_REG_2, 3),
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -1),
BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 3),
BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.retval = -3,
},
{
"xor32 zero extend check",
.insns = {
BPF_MOV32_IMM(BPF_REG_2, -1),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 32),
BPF_ALU64_IMM(BPF_OR, BPF_REG_2, 0xffff),
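/* a 32-bit XOR of r2 with itself must also clear the upper 32 bits */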
BPF_ALU32_REG(BPF_XOR, BPF_REG_2, BPF_REG_2),
BPF_MOV32_IMM(BPF_REG_0, 2),
BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 1),
BPF_MOV32_IMM(BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = ACCEPT,
.retval = 1,
},
{
"arsh32 on imm",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_ALU32_IMM(BPF_ARSH, BPF_REG_0, 5),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.retval = 0,
},
{
"arsh32 on imm 2",
.insns = {
BPF_LD_IMM64(BPF_REG_0, 0x1122334485667788),
BPF_ALU32_IMM(BPF_ARSH, BPF_REG_0, 7),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.retval = -16069393,
},
{
"arsh32 on reg",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_MOV64_IMM(BPF_REG_1, 5),
BPF_ALU32_REG(BPF_ARSH, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.retval = 0,
},
{
"arsh32 on reg 2",
.insns = {
BPF_LD_IMM64(BPF_REG_0, 0xffff55667788),
BPF_MOV64_IMM(BPF_REG_1, 15),
BPF_ALU32_REG(BPF_ARSH, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.retval = 43724,
},
{
"arsh64 on imm",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_ALU64_IMM(BPF_ARSH, BPF_REG_0, 5),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
},
{
"arsh64 on reg",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_MOV64_IMM(BPF_REG_1, 5),
BPF_ALU64_REG(BPF_ARSH, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
},
{
"invalid 64-bit BPF_END",
.insns = {
BPF_MOV32_IMM(BPF_REG_0, 0),
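/* raw encoding: BPF_END is only defined for the BPF_ALU class, so the ALU64 form (opcode d7) must be rejected */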
{
.code = BPF_ALU64 | BPF_END | BPF_TO_LE,
.dst_reg = BPF_REG_0,
.src_reg = 0,
.off = 0,
.imm = 32,
},
BPF_EXIT_INSN(),
},
.errstr = "unknown opcode d7",
.result = REJECT,
},
{
"mov64 src == dst",
.insns = {
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_2),
// Check bounds are OK
BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = ACCEPT,
},
{
"mov64 src != dst",
.insns = {
BPF_MOV64_IMM(BPF_REG_3, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_3),
// Check bounds are OK
BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = ACCEPT,
},

@@ -0,0 +1,64 @@
{
"stack out of bounds",
.insns = {
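/* a positive offset from r10 points above the frame; valid stack offsets are negative */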
BPF_ST_MEM(BPF_DW, BPF_REG_10, 8, 0),
BPF_EXIT_INSN(),
},
.errstr = "invalid stack",
.result = REJECT,
},
{
"uninitialized stack1",
.insns = {
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 2 },
.errstr = "invalid indirect read from stack",
.result = REJECT,
},
{
"uninitialized stack2",
.insns = {
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8),
BPF_EXIT_INSN(),
},
.errstr = "invalid read from stack",
.result = REJECT,
},
{
"invalid fp arithmetic",
/* If this gets ever changed, make sure JITs can deal with it. */
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 8),
BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.errstr = "R1 subtraction from stack pointer",
.result = REJECT,
},
{
"non-invalid fp arithmetic",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
},
{
"misaligned read from stack",
.insns = {
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -4),
BPF_EXIT_INSN(),
},
.errstr = "misaligned stack access",
.result = REJECT,
},

@@ -0,0 +1,45 @@
{
"invalid src register in STX",
.insns = {
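/* the -1 in the src_reg field encodes nonexistent register 15 */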
BPF_STX_MEM(BPF_B, BPF_REG_10, -1, -1),
BPF_EXIT_INSN(),
},
.errstr = "R15 is invalid",
.result = REJECT,
},
{
"invalid dst register in STX",
.insns = {
BPF_STX_MEM(BPF_B, 14, BPF_REG_10, -1),
BPF_EXIT_INSN(),
},
.errstr = "R14 is invalid",
.result = REJECT,
},
{
"invalid dst register in ST",
.insns = {
BPF_ST_MEM(BPF_B, 14, -1, -1),
BPF_EXIT_INSN(),
},
.errstr = "R14 is invalid",
.result = REJECT,
},
{
"invalid src register in LDX",
.insns = {
BPF_LDX_MEM(BPF_B, BPF_REG_0, 12, 0),
BPF_EXIT_INSN(),
},
.errstr = "R12 is invalid",
.result = REJECT,
},
{
"invalid dst register in LDX",
.insns = {
BPF_LDX_MEM(BPF_B, 11, BPF_REG_1, 0),
BPF_EXIT_INSN(),
},
.errstr = "R11 is invalid",
.result = REJECT,
},

@@ -0,0 +1,508 @@
{
"subtraction bounds (map value) variant 1",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 7),
BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1),
BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 5),
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3),
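/* r1 - r3 may be negative, so the shifted value below has no useful max bound */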
BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 56),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "R0 max value is outside of the array range",
.result = REJECT,
},
{
"subtraction bounds (map value) variant 2",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 6),
BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1),
BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 4),
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.",
.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
.result = REJECT,
},
{
"check subtraction on pointers for unpriv",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_LD_MAP_FD(BPF_REG_ARG1, 0),
BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8),
BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 9),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_MOV64_REG(BPF_REG_9, BPF_REG_FP),
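/* r9 -= map_value pointer below: pointer minus pointer, forbidden for unpriv */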
BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_0),
BPF_LD_MAP_FD(BPF_REG_ARG1, 0),
BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8),
BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
BPF_EXIT_INSN(),
BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 1, 9 },
.result = ACCEPT,
.result_unpriv = REJECT,
.errstr_unpriv = "R9 pointer -= pointer prohibited",
},
{
"bounds check based on zero-extended MOV",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
/* r2 = 0x0000'0000'ffff'ffff */
BPF_MOV32_IMM(BPF_REG_2, 0xffffffff),
/* r2 = 0 */
BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
/* no-op */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
/* access at offset 0 */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.result = ACCEPT
},
{
"bounds check based on sign-extended MOV. test1",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
/* r2 = 0xffff'ffff'ffff'ffff */
BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
/* r2 = 0xffff'ffff */
BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
/* r0 = <oob pointer> */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
/* access to OOB pointer */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "map_value pointer and 4294967295",
.result = REJECT
},
{
"bounds check based on sign-extended MOV. test2",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
/* r2 = 0xffff'ffff'ffff'ffff */
BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
/* r2 = 0xfff'ffff */
BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 36),
/* r0 = <oob pointer> */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
/* access to OOB pointer */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "R0 min value is outside of the array range",
.result = REJECT
},
{
"bounds check based on reg_off + var_off + insn_off. test1",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, mark)),
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 29) - 1),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1),
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 4 },
.errstr = "value_size=8 off=1073741825",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
"bounds check based on reg_off + var_off + insn_off. test2",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, mark)),
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 30) - 1),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, (1 << 29) - 1),
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 3),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 4 },
.errstr = "value 1073741823",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
"bounds check after truncation of non-boundary-crossing range",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
/* r1 = [0x00, 0xff] */
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
BPF_MOV64_IMM(BPF_REG_2, 1),
/* r2 = 0x10'0000'0000 */
BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 36),
/* r1 = [0x10'0000'0000, 0x10'0000'00ff] */
BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
/* r1 = [0x10'7fff'ffff, 0x10'8000'00fe] */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
/* r1 = [0x00, 0xff] */
BPF_ALU32_IMM(BPF_SUB, BPF_REG_1, 0x7fffffff),
/* r1 = 0 */
BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
/* no-op */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
/* access at offset 0 */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.result = ACCEPT
},
{
"bounds check after truncation of boundary-crossing range (1)",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
/* r1 = [0x00, 0xff] */
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
/* r1 = [0xffff'ff80, 0x1'0000'007f] */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
/* r1 = [0xffff'ff80, 0xffff'ffff] or
* [0x0000'0000, 0x0000'007f]
*/
BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 0),
BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
/* r1 = [0x00, 0xff] or
* [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]
*/
BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
/* r1 = 0 or
* [0x00ff'ffff'ff00'0000, 0x00ff'ffff'ffff'ffff]
*/
BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
/* no-op or OOB pointer computation */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
/* potentially OOB access */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
/* not actually fully unbounded, but the bound is very high */
.errstr = "R0 unbounded memory access",
.result = REJECT
},
{
"bounds check after truncation of boundary-crossing range (2)",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
/* r1 = [0x00, 0xff] */
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
/* r1 = [0xffff'ff80, 0x1'0000'007f] */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
/* r1 = [0xffff'ff80, 0xffff'ffff] or
* [0x0000'0000, 0x0000'007f]
* difference to previous test: truncation via MOV32
* instead of ALU32.
*/
BPF_MOV32_REG(BPF_REG_1, BPF_REG_1),
BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
/* r1 = [0x00, 0xff] or
* [0xffff'ffff'0000'0080, 0xffff'ffff'ffff'ffff]
*/
BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 0xffffff80 >> 1),
/* r1 = 0 or
* [0x00ff'ffff'ff00'0000, 0x00ff'ffff'ffff'ffff]
*/
BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
/* no-op or OOB pointer computation */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
/* potentially OOB access */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
/* not actually fully unbounded, but the bound is very high */
.errstr = "R0 unbounded memory access",
.result = REJECT
},
{
"bounds check after wrapping 32-bit addition",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
/* r1 = 0x7fff'ffff */
BPF_MOV64_IMM(BPF_REG_1, 0x7fffffff),
/* r1 = 0xffff'fffe */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
/* r1 = 0 */
BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 2),
/* no-op */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
/* access at offset 0 */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.result = ACCEPT
},
{
"bounds check after shift with oversized count operand",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
BPF_MOV64_IMM(BPF_REG_2, 32),
BPF_MOV64_IMM(BPF_REG_1, 1),
/* r1 = (u32)1 << (u32)32 = ? */
BPF_ALU32_REG(BPF_LSH, BPF_REG_1, BPF_REG_2),
/* r1 = [0x0000, 0xffff] */
BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xffff),
/* computes unknown pointer, potentially OOB */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
/* potentially OOB access */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "R0 max value is outside of the array range",
.result = REJECT
},
{
"bounds check after right shift of maybe-negative number",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
/* r1 = [0x00, 0xff] */
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
/* r1 = [-0x01, 0xfe] */
BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
/* r1 = 0 or 0xff'ffff'ffff'ffff */
BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
/* r1 = 0 or 0xffff'ffff'ffff */
BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 8),
/* computes unknown pointer, potentially OOB */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
/* potentially OOB access */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "R0 unbounded memory access",
.result = REJECT
},
{
"bounds check after 32-bit right shift with 64-bit input",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
/* r1 = 2 */
BPF_MOV64_IMM(BPF_REG_1, 2),
/* r1 = 1<<32 */
BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 31),
/* r1 = 0 (NOT 2!) */
BPF_ALU32_IMM(BPF_RSH, BPF_REG_1, 31),
/* r1 = 0xffff'fffe (NOT 0!) */
BPF_ALU32_IMM(BPF_SUB, BPF_REG_1, 2),
/* computes OOB pointer */
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
/* OOB access */
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
/* exit */
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "R0 invalid mem access",
.result = REJECT,
},
{
"bounds check map access with off+size signed 32bit overflow. test1",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
BPF_EXIT_INSN(),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7ffffffe),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_JMP_A(0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "map_value pointer and 2147483646",
.result = REJECT
},
{
"bounds check map access with off+size signed 32bit overflow. test2",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
BPF_EXIT_INSN(),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
BPF_JMP_A(0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "pointer offset 1073741822",
.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
.result = REJECT
},
{
"bounds check map access with off+size signed 32bit overflow. test3",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
BPF_EXIT_INSN(),
BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
BPF_JMP_A(0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "pointer offset -1073741822",
.errstr_unpriv = "R0 pointer arithmetic of map value goes out of range",
.result = REJECT
},
{
"bounds check map access with off+size signed 32bit overflow. test4",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
BPF_EXIT_INSN(),
BPF_MOV64_IMM(BPF_REG_1, 1000000),
BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 1000000),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
BPF_JMP_A(0),
BPF_EXIT_INSN(),
},
.fixup_map_hash_8b = { 3 },
.errstr = "map_value pointer and 1000000000000",
.result = REJECT
},

@@ -0,0 +1,124 @@
{
"check deducing bounds from const, 1",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 0),
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = REJECT,
.errstr = "R0 tried to subtract pointer from scalar",
},
{
"check deducing bounds from const, 2",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1),
BPF_EXIT_INSN(),
BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 1, 1),
BPF_EXIT_INSN(),
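/* both signed checks passed, so r0 is known to be exactly 1 here */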
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.retval = 1,
},
{
"check deducing bounds from const, 3",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = REJECT,
.errstr = "R0 tried to subtract pointer from scalar",
},
{
"check deducing bounds from const, 4",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 1),
BPF_EXIT_INSN(),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
BPF_EXIT_INSN(),
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
},
{
"check deducing bounds from const, 5",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1),
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = REJECT,
.errstr = "R0 tried to subtract pointer from scalar",
},
{
"check deducing bounds from const, 6",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
BPF_EXIT_INSN(),
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = REJECT,
.errstr = "R0 tried to subtract pointer from scalar",
},
{
"check deducing bounds from const, 7",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, ~0),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
offsetof(struct __sk_buff, mark)),
BPF_EXIT_INSN(),
},
.result = REJECT,
.errstr = "dereference of modified ctx ptr",
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"check deducing bounds from const, 8",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, ~0),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
offsetof(struct __sk_buff, mark)),
BPF_EXIT_INSN(),
},
.result = REJECT,
.errstr = "dereference of modified ctx ptr",
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"check deducing bounds from const, 9",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = REJECT,
.errstr = "R0 tried to subtract pointer from scalar",
},
{
"check deducing bounds from const, 10",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
/* Marks reg as unknown. */
BPF_ALU64_IMM(BPF_NEG, BPF_REG_0, 0),
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
BPF_EXIT_INSN(),
},
.result = REJECT,
.errstr = "math between ctx pointer and register with unbounded min value is not allowed",
},

@@ -0,0 +1,406 @@
{
"bounds checks mixing signed and unsigned, positive bounds",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
BPF_MOV64_IMM(BPF_REG_2, 2),
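/* r1 = -8 from the stack; the unsigned JGE and signed JSGT below never establish a lower bound */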
|
||||
BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 3),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 4, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.errstr = "unbounded min value",
|
||||
.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
|
||||
.result = REJECT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, -1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.errstr = "unbounded min value",
|
||||
.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
|
||||
.result = REJECT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 2",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, -1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5),
|
||||
BPF_MOV64_IMM(BPF_REG_8, 0),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_1),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_8, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.errstr = "unbounded min value",
|
||||
.errstr_unpriv = "R8 has unknown scalar with mixed signed bounds",
|
||||
.result = REJECT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 3",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, -1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 4),
|
||||
BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_8, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.errstr = "unbounded min value",
|
||||
.errstr_unpriv = "R8 has unknown scalar with mixed signed bounds",
|
||||
.result = REJECT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 4",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, 1),
|
||||
BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.result = ACCEPT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 5",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, -1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 4),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 4),
|
||||
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.errstr = "unbounded min value",
|
||||
.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
|
||||
.result = REJECT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 6",
|
||||
.insns = {
|
||||
BPF_MOV64_IMM(BPF_REG_2, 0),
|
||||
BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -512),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_6, -1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_6, 5),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_4, 1, 4),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 1),
|
||||
BPF_MOV64_IMM(BPF_REG_5, 0),
|
||||
BPF_ST_MEM(BPF_H, BPF_REG_10, -512, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.errstr = "R4 min value is negative, either use unsigned",
|
||||
.result = REJECT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 7",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, 1024 * 1024 * 1024),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.result = ACCEPT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 8",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, -1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.errstr = "unbounded min value",
|
||||
.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
|
||||
.result = REJECT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 9",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_LD_IMM64(BPF_REG_2, -9223372036854775808ULL),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.result = ACCEPT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 10",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, 0),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.errstr = "unbounded min value",
|
||||
.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
|
||||
.result = REJECT,
|
||||
},
|
||||
{
|
||||
"bounds checks mixing signed and unsigned, variant 11",
|
||||
.insns = {
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
|
||||
BPF_LD_MAP_FD(BPF_REG_1, 0),
|
||||
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
|
||||
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
|
||||
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
|
||||
BPF_MOV64_IMM(BPF_REG_2, -1),
|
||||
BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
|
||||
/* Dead branch. */
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
|
||||
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.fixup_map_hash_8b = { 3 },
|
||||
.errstr = "unbounded min value",
|
||||
.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
|
||||
.result = REJECT,
|
||||
},
{
	"bounds checks mixing signed and unsigned, variant 12",
	.insns = {
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
	BPF_MOV64_IMM(BPF_REG_2, -6),
	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.fixup_map_hash_8b = { 3 },
	.errstr = "unbounded min value",
	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
	.result = REJECT,
},
{
	"bounds checks mixing signed and unsigned, variant 13",
	.insns = {
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
	BPF_MOV64_IMM(BPF_REG_2, 2),
	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
	BPF_MOV64_IMM(BPF_REG_7, 1),
	BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, 0, 2),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_1),
	BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, 4, 2),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_7),
	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.fixup_map_hash_8b = { 3 },
	.errstr = "unbounded min value",
	.errstr_unpriv = "R7 has unknown scalar with mixed signed bounds",
	.result = REJECT,
},
{
	"bounds checks mixing signed and unsigned, variant 14",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1,
		    offsetof(struct __sk_buff, mark)),
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
	BPF_MOV64_IMM(BPF_REG_2, -1),
	BPF_MOV64_IMM(BPF_REG_8, 2),
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_9, 42, 6),
	BPF_JMP_REG(BPF_JSGT, BPF_REG_8, BPF_REG_1, 3),
	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, -3),
	BPF_JMP_IMM(BPF_JA, 0, 0, -7),
	},
	.fixup_map_hash_8b = { 4 },
	.errstr = "unbounded min value",
	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
	.result = REJECT,
},
{
	"bounds checks mixing signed and unsigned, variant 15",
	.insns = {
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
	BPF_MOV64_IMM(BPF_REG_2, -6),
	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
	BPF_JMP_IMM(BPF_JGT, BPF_REG_0, 1, 2),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.fixup_map_hash_8b = { 3 },
	.errstr = "unbounded min value",
	.errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
	.result = REJECT,
	.result_unpriv = REJECT,
},
@@ -0,0 +1,44 @@
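/* The test below exercises return-value tracking for bpf_get_stack():
 * R0 is sign-extended to 64 bit via LSH/ARSH by 32 and checked to be
 * positive (BPF_JSLT) before it is used to re-size and re-offset the
 * buffer for a second bpf_get_stack() call, so the program must be
 * accepted.
 */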
{
	"bpf_get_stack return R0 within range",
	.insns = {
	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 28),
	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
	BPF_MOV64_IMM(BPF_REG_9, sizeof(struct test_val)),
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct test_val)),
	BPF_MOV64_IMM(BPF_REG_4, 256),
	BPF_EMIT_CALL(BPF_FUNC_get_stack),
	BPF_MOV64_IMM(BPF_REG_1, 0),
	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32),
	BPF_JMP_REG(BPF_JSLT, BPF_REG_1, BPF_REG_8, 16),
	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8),
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_1, 32),
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_1),
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
	BPF_MOV64_IMM(BPF_REG_5, sizeof(struct test_val)),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_5),
	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 4),
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_9),
	BPF_MOV64_IMM(BPF_REG_4, 0),
	BPF_EMIT_CALL(BPF_FUNC_get_stack),
	BPF_EXIT_INSN(),
	},
	.fixup_map_hash_48b = { 4 },
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
Diff not shown because of its large size.
@@ -0,0 +1,70 @@
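/* Control-flow graph checks: instructions that can never be reached,
 * jumps that land outside the program, and loops (back-edges, whether
 * unconditional or conditional) are all rejected up front.
 */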
{
	"unreachable",
	.insns = {
	BPF_EXIT_INSN(),
	BPF_EXIT_INSN(),
	},
	.errstr = "unreachable",
	.result = REJECT,
},
{
	"unreachable2",
	.insns = {
	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "unreachable",
	.result = REJECT,
},
{
	"out of range jump",
	.insns = {
	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
	BPF_EXIT_INSN(),
	},
	.errstr = "jump out of range",
	.result = REJECT,
},
{
	"out of range jump2",
	.insns = {
	BPF_JMP_IMM(BPF_JA, 0, 0, -2),
	BPF_EXIT_INSN(),
	},
	.errstr = "jump out of range",
	.result = REJECT,
},
{
	"loop (back-edge)",
	.insns = {
	BPF_JMP_IMM(BPF_JA, 0, 0, -1),
	BPF_EXIT_INSN(),
	},
	.errstr = "back-edge",
	.result = REJECT,
},
{
	"loop2 (back-edge)",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
	BPF_JMP_IMM(BPF_JA, 0, 0, -4),
	BPF_EXIT_INSN(),
	},
	.errstr = "back-edge",
	.result = REJECT,
},
{
	"conditional loop",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -3),
	BPF_EXIT_INSN(),
	},
	.errstr = "back-edge",
	.result = REJECT,
},
@@ -0,0 +1,72 @@
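/* BPF_PROG_TYPE_CGROUP_SOCK programs must exit with R0 equal to 0 or 1,
 * so the verifier only accepts programs where it can prove that range:
 * masking with 1 or an immediate 0/1 passes, anything wider or unknown
 * is rejected.
 */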
{
	"bpf_exit with invalid return code. test1",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "R0 has value (0x0; 0xffffffff)",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
},
{
	"bpf_exit with invalid return code. test2",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
},
{
	"bpf_exit with invalid return code. test3",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 3),
	BPF_EXIT_INSN(),
	},
	.errstr = "R0 has value (0x0; 0x3)",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
},
{
	"bpf_exit with invalid return code. test4",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
},
{
	"bpf_exit with invalid return code. test5",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 2),
	BPF_EXIT_INSN(),
	},
	.errstr = "R0 has value (0x2; 0x0)",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
},
{
	"bpf_exit with invalid return code. test6",
	.insns = {
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.errstr = "R0 is not a known value (ctx)",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
},
{
	"bpf_exit with invalid return code. test7",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 4),
	BPF_ALU64_REG(BPF_MUL, BPF_REG_0, BPF_REG_2),
	BPF_EXIT_INSN(),
	},
	.errstr = "R0 has unknown scalar value",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
},
@@ -0,0 +1,197 @@
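/* CGROUP_SKB context coverage: most __sk_buff fields are readable and
 * data/data_end permit bounded direct packet reads, but tc_classid,
 * data_meta and flow_keys are off limits, napi_id is read-only, and the
 * packet-pointer and tstamp accesses are additionally restricted for
 * unprivileged users.
 */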
{
	"direct packet read test#1 for CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
		    offsetof(struct __sk_buff, len)),
	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1,
		    offsetof(struct __sk_buff, pkt_type)),
	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
		    offsetof(struct __sk_buff, mark)),
	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_6,
		    offsetof(struct __sk_buff, mark)),
	BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
		    offsetof(struct __sk_buff, queue_mapping)),
	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
		    offsetof(struct __sk_buff, protocol)),
	BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1,
		    offsetof(struct __sk_buff, vlan_present)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.result_unpriv = REJECT,
	.errstr_unpriv = "invalid bpf_context access off=76 size=4",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"direct packet read test#2 for CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
		    offsetof(struct __sk_buff, vlan_tci)),
	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1,
		    offsetof(struct __sk_buff, vlan_proto)),
	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
		    offsetof(struct __sk_buff, priority)),
	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_6,
		    offsetof(struct __sk_buff, priority)),
	BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
		    offsetof(struct __sk_buff, ingress_ifindex)),
	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
		    offsetof(struct __sk_buff, tc_index)),
	BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1,
		    offsetof(struct __sk_buff, hash)),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"direct packet read test#3 for CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
		    offsetof(struct __sk_buff, cb[0])),
	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1,
		    offsetof(struct __sk_buff, cb[1])),
	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
		    offsetof(struct __sk_buff, cb[2])),
	BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
		    offsetof(struct __sk_buff, cb[3])),
	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
		    offsetof(struct __sk_buff, cb[4])),
	BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1,
		    offsetof(struct __sk_buff, napi_id)),
	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_4,
		    offsetof(struct __sk_buff, cb[0])),
	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_5,
		    offsetof(struct __sk_buff, cb[1])),
	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_6,
		    offsetof(struct __sk_buff, cb[2])),
	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_7,
		    offsetof(struct __sk_buff, cb[3])),
	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_8,
		    offsetof(struct __sk_buff, cb[4])),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"direct packet read test#4 for CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, family)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, remote_ip4)),
	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
		    offsetof(struct __sk_buff, local_ip4)),
	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1,
		    offsetof(struct __sk_buff, remote_ip6[0])),
	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1,
		    offsetof(struct __sk_buff, remote_ip6[1])),
	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1,
		    offsetof(struct __sk_buff, remote_ip6[2])),
	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1,
		    offsetof(struct __sk_buff, remote_ip6[3])),
	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
		    offsetof(struct __sk_buff, local_ip6[0])),
	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
		    offsetof(struct __sk_buff, local_ip6[1])),
	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
		    offsetof(struct __sk_buff, local_ip6[2])),
	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
		    offsetof(struct __sk_buff, local_ip6[3])),
	BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
		    offsetof(struct __sk_buff, remote_port)),
	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
		    offsetof(struct __sk_buff, local_port)),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid access of tc_classid for CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct __sk_buff, tc_classid)),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = REJECT,
	.errstr = "invalid bpf_context access",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid access of data_meta for CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct __sk_buff, data_meta)),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = REJECT,
	.errstr = "invalid bpf_context access",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid access of flow_keys for CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct __sk_buff, flow_keys)),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = REJECT,
	.errstr = "invalid bpf_context access",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid write access to napi_id for CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1,
		    offsetof(struct __sk_buff, napi_id)),
	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_9,
		    offsetof(struct __sk_buff, napi_id)),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = REJECT,
	.errstr = "invalid bpf_context access",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"write tstamp from CGROUP_SKB",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0,
		    offsetof(struct __sk_buff, tstamp)),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.result_unpriv = REJECT,
	.errstr_unpriv = "invalid bpf_context access off=152 size=8",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"read tstamp from CGROUP_SKB",
	.insns = {
	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1,
		    offsetof(struct __sk_buff, tstamp)),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
@@ -0,0 +1,220 @@
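/* bpf_get_local_storage() argument validation: the map passed in must
 * really be a (per-cpu) cgroup storage map, the flags argument must be
 * zero, and loads from the returned pointer have to stay within
 * value_size, as the errstr patterns below spell out.
 */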
{
	"valid cgroup storage access",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_cgroup_storage = { 1 },
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid cgroup storage access 1",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_map_hash_8b = { 1 },
	.result = REJECT,
	.errstr = "cannot pass map_type 1 into func bpf_get_local_storage",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid cgroup storage access 2",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 1),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.result = REJECT,
	.errstr = "fd 1 is not pointing to valid bpf_map",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid cgroup storage access 3",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 256),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.fixup_cgroup_storage = { 1 },
	.result = REJECT,
	.errstr = "invalid access to map value, value_size=64 off=256 size=4",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid cgroup storage access 4",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -2),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_cgroup_storage = { 1 },
	.result = REJECT,
	.errstr = "invalid access to map value, value_size=64 off=-2 size=4",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
	"invalid cgroup storage access 5",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 7),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_cgroup_storage = { 1 },
	.result = REJECT,
	.errstr = "get_local_storage() doesn't support non-zero flags",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid cgroup storage access 6",
	.insns = {
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_cgroup_storage = { 1 },
	.result = REJECT,
	.errstr = "get_local_storage() doesn't support non-zero flags",
	.errstr_unpriv = "R2 leaks addr into helper function",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"valid per-cpu cgroup storage access",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_percpu_cgroup_storage = { 1 },
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid per-cpu cgroup storage access 1",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_map_hash_8b = { 1 },
	.result = REJECT,
	.errstr = "cannot pass map_type 1 into func bpf_get_local_storage",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid per-cpu cgroup storage access 2",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 1),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.result = REJECT,
	.errstr = "fd 1 is not pointing to valid bpf_map",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid per-cpu cgroup storage access 3",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 256),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.fixup_percpu_cgroup_storage = { 1 },
	.result = REJECT,
	.errstr = "invalid access to map value, value_size=64 off=256 size=4",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid per-cpu cgroup storage access 4",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -2),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_cgroup_storage = { 1 },
	.result = REJECT,
	.errstr = "invalid access to map value, value_size=64 off=-2 size=4",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
	"invalid per-cpu cgroup storage access 5",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 7),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_percpu_cgroup_storage = { 1 },
	.result = REJECT,
	.errstr = "get_local_storage() doesn't support non-zero flags",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
{
	"invalid per-cpu cgroup storage access 6",
	.insns = {
	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
	BPF_LD_MAP_FD(BPF_REG_1, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.fixup_percpu_cgroup_storage = { 1 },
	.result = REJECT,
	.errstr = "get_local_storage() doesn't support non-zero flags",
	.errstr_unpriv = "R2 leaks addr into helper function",
	.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
},
@@ -0,0 +1,60 @@
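/* ORing two known constants must still be tracked as a known constant:
 * 34 | 13 = 47 stays inside the 48-byte stack buffer at R10-48, while
 * 34 | 24 = 58 overruns it, so the helper call with the latter size has
 * to be rejected.
 */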
{
	"constant register |= constant should keep constant type",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -48),
	BPF_MOV64_IMM(BPF_REG_2, 34),
	BPF_ALU64_IMM(BPF_OR, BPF_REG_2, 13),
	BPF_MOV64_IMM(BPF_REG_3, 0),
	BPF_EMIT_CALL(BPF_FUNC_probe_read),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
{
	"constant register |= constant should not bypass stack boundary checks",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -48),
	BPF_MOV64_IMM(BPF_REG_2, 34),
	BPF_ALU64_IMM(BPF_OR, BPF_REG_2, 24),
	BPF_MOV64_IMM(BPF_REG_3, 0),
	BPF_EMIT_CALL(BPF_FUNC_probe_read),
	BPF_EXIT_INSN(),
	},
	.errstr = "invalid stack type R1 off=-48 access_size=58",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
{
	"constant register |= constant register should keep constant type",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -48),
	BPF_MOV64_IMM(BPF_REG_2, 34),
	BPF_MOV64_IMM(BPF_REG_4, 13),
	BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_4),
	BPF_MOV64_IMM(BPF_REG_3, 0),
	BPF_EMIT_CALL(BPF_FUNC_probe_read),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
{
	"constant register |= constant register should not bypass stack boundary checks",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -48),
	BPF_MOV64_IMM(BPF_REG_2, 34),
	BPF_MOV64_IMM(BPF_REG_4, 24),
	BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_4),
	BPF_MOV64_IMM(BPF_REG_3, 0),
	BPF_EMIT_CALL(BPF_FUNC_probe_read),
	BPF_EXIT_INSN(),
	},
	.errstr = "invalid stack type R1 off=-48 access_size=58",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
},
@@ -0,0 +1,93 @@
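/* PTR_TO_CTX restrictions: the context pointer cannot be the target of
 * BPF_ST or BPF_XADD stores, and once it has been modified by pointer
 * arithmetic (fixed or variable) it may no longer be dereferenced or
 * handed to a helper.
 */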
{
	"context stores via ST",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_ST_MEM(BPF_DW, BPF_REG_1, offsetof(struct __sk_buff, mark), 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "BPF_ST stores into R1 ctx is not allowed",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
	"context stores via XADD",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_1,
		     BPF_REG_0, offsetof(struct __sk_buff, mark), 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "BPF_XADD stores into R1 ctx is not allowed",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
	"arithmetic ops make PTR_TO_CTX unusable",
	.insns = {
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1,
		      offsetof(struct __sk_buff, data) -
		      offsetof(struct __sk_buff, mark)),
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct __sk_buff, mark)),
	BPF_EXIT_INSN(),
	},
	.errstr = "dereference of modified ctx ptr",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
	"pass unmodified ctx pointer to helper",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
		     BPF_FUNC_csum_update),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
},
{
	"pass modified ctx pointer to helper, 1",
	.insns = {
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
		     BPF_FUNC_csum_update),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = REJECT,
	.errstr = "dereference of modified ctx ptr",
},
{
	"pass modified ctx pointer to helper, 2",
	.insns = {
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
		     BPF_FUNC_get_socket_cookie),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result_unpriv = REJECT,
	.result = REJECT,
	.errstr_unpriv = "dereference of modified ctx ptr",
	.errstr = "dereference of modified ctx ptr",
},
{
	"pass modified ctx pointer to helper, 3",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 0),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 4),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
	BPF_MOV64_IMM(BPF_REG_2, 0),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
		     BPF_FUNC_csum_update),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = REJECT,
	.errstr = "variable ctx access var_off=(0x0; 0x4)",
},
@@ -0,0 +1,180 @@
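/* sk_msg_md context accesses: each named field is readable at its exact
 * offset and size, while oversized, misaligned or past-the-end loads are
 * rejected; data/data_end additionally allow bounded direct packet reads
 * and writes.
 */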
{
	"valid access family in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, family)),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"valid access remote_ip4 in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, remote_ip4)),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"valid access local_ip4 in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, local_ip4)),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"valid access remote_port in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, remote_port)),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"valid access local_port in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, local_port)),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"valid access remote_ip6 in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, remote_ip6[0])),
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, remote_ip6[1])),
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, remote_ip6[2])),
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, remote_ip6[3])),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_SKB,
},
{
	"valid access local_ip6 in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, local_ip6[0])),
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, local_ip6[1])),
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, local_ip6[2])),
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, local_ip6[3])),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_SKB,
},
{
	"valid access size in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
		    offsetof(struct sk_msg_md, size)),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"invalid 64B read of size in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1,
		    offsetof(struct sk_msg_md, size)),
	BPF_EXIT_INSN(),
	},
	.errstr = "invalid bpf_context access",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"invalid read past end of SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct sk_msg_md, size) + 4),
	BPF_EXIT_INSN(),
	},
	.errstr = "invalid bpf_context access",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"invalid read offset in SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct sk_msg_md, family) + 1),
	BPF_EXIT_INSN(),
	},
	.errstr = "invalid bpf_context access",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
	"direct packet read for SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1,
		    offsetof(struct sk_msg_md, data)),
	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1,
		    offsetof(struct sk_msg_md, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"direct packet write for SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1,
		    offsetof(struct sk_msg_md, data)),
	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1,
		    offsetof(struct sk_msg_md, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
{
	"overlapping checks for direct packet access SK_MSG",
	.insns = {
	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1,
		    offsetof(struct sk_msg_md, data)),
	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1,
		    offsetof(struct sk_msg_md, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_2, 6),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SK_MSG,
},
Diff not shown because of its large size.
@@ -0,0 +1,159 @@
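/* Dead code handling: these programs contain instructions that are
 * reachable in the CFG but dead once register values are known, and the
 * expected retval pins down that the surviving path is the one executed;
 * the subprogram cases also exercise the unprivileged restriction on
 * bpf-to-bpf calls.
 */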
{
	"dead code: start",
	.insns = {
	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, -4),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 7,
},
{
	"dead code: mid 1",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 0),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 7,
},
{
	"dead code: mid 2",
	.insns = {
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 4),
	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_EXIT_INSN(),
	BPF_MOV64_IMM(BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 1,
},
{
	"dead code: end 1",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 1),
	BPF_EXIT_INSN(),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 7,
},
{
	"dead code: end 2",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 1),
	BPF_EXIT_INSN(),
	BPF_MOV64_IMM(BPF_REG_0, 12),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 7,
},
{
	"dead code: end 3",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 8, 1),
	BPF_EXIT_INSN(),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 1),
	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
	BPF_MOV64_IMM(BPF_REG_0, 12),
	BPF_JMP_IMM(BPF_JA, 0, 0, -5),
	},
	.result = ACCEPT,
	.retval = 7,
},
{
	"dead code: tail of main + func",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 8, 1),
	BPF_EXIT_INSN(),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
	BPF_EXIT_INSN(),
	BPF_MOV64_IMM(BPF_REG_0, 12),
	BPF_EXIT_INSN(),
	},
	.errstr_unpriv = "function calls to other bpf functions are allowed for root only",
	.result_unpriv = REJECT,
	.result = ACCEPT,
	.retval = 7,
},
{
	"dead code: tail of main + two functions",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 8, 1),
	BPF_EXIT_INSN(),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
	BPF_EXIT_INSN(),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
	BPF_EXIT_INSN(),
	BPF_MOV64_IMM(BPF_REG_0, 12),
	BPF_EXIT_INSN(),
	},
	.errstr_unpriv = "function calls to other bpf functions are allowed for root only",
	.result_unpriv = REJECT,
	.result = ACCEPT,
	.retval = 7,
},
{
	"dead code: function in the middle and mid of another func",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_1, 7),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
	BPF_EXIT_INSN(),
	BPF_MOV64_IMM(BPF_REG_0, 12),
	BPF_EXIT_INSN(),
	BPF_MOV64_IMM(BPF_REG_0, 7),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 7, 1),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -5),
	BPF_EXIT_INSN(),
	},
	.errstr_unpriv = "function calls to other bpf functions are allowed for root only",
	.result_unpriv = REJECT,
	.result = ACCEPT,
	.retval = 7,
},
{
	"dead code: middle of main before call",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_1, 2),
	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 2, 1),
	BPF_MOV64_IMM(BPF_REG_1, 5),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
	BPF_EXIT_INSN(),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.errstr_unpriv = "function calls to other bpf functions are allowed for root only",
	.result_unpriv = REJECT,
	.result = ACCEPT,
	.retval = 2,
},
{
	"dead code: start of a function",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_1, 2),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
	BPF_EXIT_INSN(),
	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.errstr_unpriv = "function calls to other bpf functions are allowed for root only",
	.result_unpriv = REJECT,
	.result = ACCEPT,
	.retval = 2,
},
@@ -0,0 +1,633 @@
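/* Direct packet access: loads and stores through packet pointers are
 * only valid behind a range check against data_end, and the tests below
 * pin down which comparisons, spills and pointer arithmetic preserve
 * that range knowledge.
 */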
{
|
||||
"pkt_end - pkt_start is allowed",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_2),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.retval = TEST_DATA_LEN,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test1",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test2",
|
||||
.insns = {
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 15),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 7),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_3, 12),
|
||||
BPF_ALU64_IMM(BPF_MUL, BPF_REG_4, 14),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_4),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, len)),
|
||||
BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 49),
|
||||
BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 49),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
|
||||
BPF_MOV64_REG(BPF_REG_2, BPF_REG_3),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_3, 4),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test3",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.errstr = "invalid bpf_context access off=76",
|
||||
.result = REJECT,
|
||||
.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
|
||||
},
|
||||
{
|
||||
"direct packet access: test4 (write)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
|
||||
BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test5 (pkt_end >= reg, good access)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test6 (pkt_end >= reg, bad access)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.errstr = "invalid access to packet",
|
||||
.result = REJECT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test7 (pkt_end >= reg, both accesses)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.errstr = "invalid access to packet",
|
||||
.result = REJECT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test8 (double test, variant 1)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 4),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test9 (double test, variant 2)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test10 (write invalid)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.errstr = "invalid access to packet",
|
||||
.result = REJECT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test11 (shift, good access)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
|
||||
BPF_MOV64_IMM(BPF_REG_3, 144),
|
||||
BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
|
||||
BPF_ALU64_IMM(BPF_RSH, BPF_REG_5, 3),
|
||||
BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
.retval = 1,
|
||||
},
|
||||
{
|
||||
"direct packet access: test12 (and, good access)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
|
||||
BPF_MOV64_IMM(BPF_REG_3, 144),
|
||||
BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
|
||||
BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 15),
|
||||
BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
.retval = 1,
|
||||
},
|
||||
{
|
||||
"direct packet access: test13 (branches, good access)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 13),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, mark)),
|
||||
BPF_MOV64_IMM(BPF_REG_4, 1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 2),
|
||||
BPF_MOV64_IMM(BPF_REG_3, 14),
|
||||
BPF_JMP_IMM(BPF_JA, 0, 0, 1),
|
||||
BPF_MOV64_IMM(BPF_REG_3, 24),
|
||||
BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
|
||||
BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 15),
|
||||
BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
.retval = 1,
|
||||
},
|
||||
{
|
||||
"direct packet access: test14 (pkt_ptr += 0, CONST_IMM, good access)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 7),
|
||||
BPF_MOV64_IMM(BPF_REG_5, 12),
|
||||
BPF_ALU64_IMM(BPF_RSH, BPF_REG_5, 4),
|
||||
BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
|
||||
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 1),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
.retval = 1,
|
||||
},
|
||||
{
|
||||
"direct packet access: test15 (spill with xadd)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
|
||||
BPF_MOV64_IMM(BPF_REG_5, 4096),
|
||||
BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
|
||||
BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
|
||||
BPF_STX_XADD(BPF_DW, BPF_REG_4, BPF_REG_5, 0),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_4, 0),
|
||||
BPF_STX_MEM(BPF_W, BPF_REG_2, BPF_REG_5, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.errstr = "R2 invalid mem access 'inv'",
|
||||
.result = REJECT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test16 (arith on data_end)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 16),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
|
||||
BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.errstr = "R3 pointer arithmetic on pkt_end",
|
||||
.result = REJECT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test17 (pruning, alignment)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, mark)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 14),
|
||||
BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 1, 4),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
|
||||
BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, -4),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
|
||||
BPF_JMP_A(-6),
|
||||
},
|
||||
.errstr = "misaligned packet access off 2+(0x0; 0x0)+15+-4 size 4",
|
||||
.result = REJECT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
|
||||
},
|
||||
{
|
||||
"direct packet access: test18 (imm += pkt_ptr, 1)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 8),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
|
||||
BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test19 (imm += pkt_ptr, 2)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
|
||||
BPF_MOV64_IMM(BPF_REG_4, 4),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
|
||||
BPF_STX_MEM(BPF_B, BPF_REG_4, BPF_REG_4, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.result = ACCEPT,
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test20 (x += pkt_ptr, 1)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0xffffffff),
|
||||
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
|
||||
BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0x7fff),
|
||||
BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
|
||||
BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
|
||||
BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
.result = ACCEPT,
|
||||
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
|
||||
},
|
||||
{
|
||||
"direct packet access: test21 (x += pkt_ptr, 2)",
|
||||
.insns = {
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data)),
|
||||
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
|
||||
offsetof(struct __sk_buff, data_end)),
|
||||
BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9),
|
||||
BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
|
||||
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -8),
|
||||
BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
|
||||
BPF_ALU64_IMM(BPF_AND, BPF_REG_4, 0x7fff),
|
||||
BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
|
||||
BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
|
||||
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1),
|
||||
BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
|
||||
BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
|
||||
BPF_MOV64_IMM(BPF_REG_0, 0),
|
||||
BPF_EXIT_INSN(),
|
||||
},
|
||||
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
|
||||
.result = ACCEPT,
|
||||
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
|
||||
},
|
||||
{
	"direct packet access: test22 (x += pkt_ptr, 3)",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_3, -16),
	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -16),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 11),
	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
	BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
	BPF_STX_XADD(BPF_DW, BPF_REG_10, BPF_REG_4, -8),
	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
	BPF_ALU64_IMM(BPF_RSH, BPF_REG_4, 49),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
	BPF_MOV64_IMM(BPF_REG_2, 1),
	BPF_STX_MEM(BPF_H, BPF_REG_4, BPF_REG_2, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
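/* Negative variant: with a 0xffff mask plus the constant 31, the
 * offset can already exceed the maximum packet offset, so the range
 * check yields no usable range for R5 (r=0 in the error string) and
 * the 8-byte store is rejected.
 */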
{
	"direct packet access: test23 (x += pkt_ptr, 4)",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_MOV64_IMM(BPF_REG_0, 0xffffffff),
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff),
	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
	BPF_MOV64_IMM(BPF_REG_0, 31),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
	BPF_MOV64_REG(BPF_REG_5, BPF_REG_0),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0xffff - 1),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = REJECT,
	.errstr = "invalid access to packet, off=0 size=8, R5(id=1,off=0,r=0)",
	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
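/* Like test23 but with a 0xff mask: the worst-case offset now stays
 * within packet bounds, so the same pattern is accepted.
 */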
{
	"direct packet access: test24 (x += pkt_ptr, 5)",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_MOV64_IMM(BPF_REG_0, 0xffffffff),
	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xff),
	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
	BPF_MOV64_IMM(BPF_REG_0, 64),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
	BPF_MOV64_REG(BPF_REG_5, BPF_REG_0),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7fff - 1),
	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
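/* tests 25-28 exercise branch marking for JLT/JLE: the good-access
 * variants touch the packet only on the path where the comparison
 * against data_end proved it safe, the bad-access variants read on
 * the unproven path and must be rejected.
 */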
{
	"direct packet access: test25 (marking on <, good access)",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 2),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
	BPF_JMP_IMM(BPF_JA, 0, 0, -4),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
	"direct packet access: test26 (marking on <, bad access)",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 3),
	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	BPF_JMP_IMM(BPF_JA, 0, 0, -3),
	},
	.result = REJECT,
	.errstr = "invalid access to packet",
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
	"direct packet access: test27 (marking on <=, good access)",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 1),
	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
	BPF_MOV64_IMM(BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.retval = 1,
},
{
	"direct packet access: test28 (marking on <=, bad access)",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct __sk_buff, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct __sk_buff, data_end)),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 2),
	BPF_MOV64_IMM(BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
	BPF_JMP_IMM(BPF_JA, 0, 0, -4),
	},
	.result = REJECT,
	.errstr = "invalid access to packet",
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
@@ -0,0 +1,40 @@
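/* 32-bit wraparound: adding two large immediates to the frame pointer
 * must be rejected rather than silently wrapping into a seemingly
 * valid stack offset.
 */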
{
	"direct stack access with 32-bit wraparound. test1",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),
	BPF_MOV32_IMM(BPF_REG_0, 0),
	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "fp pointer and 2147483647",
	.result = REJECT
},
{
	"direct stack access with 32-bit wraparound. test2",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x3fffffff),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x3fffffff),
	BPF_MOV32_IMM(BPF_REG_0, 0),
	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "fp pointer and 1073741823",
	.result = REJECT
},
{
	"direct stack access with 32-bit wraparound. test3",
	.insns = {
	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x1fffffff),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x1fffffff),
	BPF_MOV32_IMM(BPF_REG_0, 0),
	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "fp pointer offset 1073741822",
	.errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
	.result = REJECT
},
@@ -0,0 +1,184 @@
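/* Runtime div/mod by zero must not crash: R0 is preloaded with 42 and
 * the divide-by-zero leaves it untouched, so every test here returns
 * 42. The 32-bit ops only look at the lower half of the divisor,
 * hence the 0xffffffff00000000 "zero check 2" variants.
 */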
{
	"DIV32 by 0, zero check 1",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, 42),
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_2, 1),
	BPF_ALU32_REG(BPF_DIV, BPF_REG_2, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 42,
},
{
	"DIV32 by 0, zero check 2",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, 42),
	BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL),
	BPF_MOV32_IMM(BPF_REG_2, 1),
	BPF_ALU32_REG(BPF_DIV, BPF_REG_2, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 42,
},
{
	"DIV64 by 0, zero check",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, 42),
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_2, 1),
	BPF_ALU64_REG(BPF_DIV, BPF_REG_2, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 42,
},
{
	"MOD32 by 0, zero check 1",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, 42),
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_2, 1),
	BPF_ALU32_REG(BPF_MOD, BPF_REG_2, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 42,
},
{
	"MOD32 by 0, zero check 2",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, 42),
	BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL),
	BPF_MOV32_IMM(BPF_REG_2, 1),
	BPF_ALU32_REG(BPF_MOD, BPF_REG_2, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 42,
},
{
	"MOD64 by 0, zero check",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, 42),
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_2, 1),
	BPF_ALU64_REG(BPF_MOD, BPF_REG_2, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.result = ACCEPT,
	.retval = 42,
},
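/* The same checks as SCHED_CLS programs, this time observing the
 * destination register itself: DIV by 0 yields 0 while MOD by 0
 * leaves the destination unchanged, as the retvals below encode.
 */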
{
	"DIV32 by 0, zero check ok, cls",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, 42),
	BPF_MOV32_IMM(BPF_REG_1, 2),
	BPF_MOV32_IMM(BPF_REG_2, 16),
	BPF_ALU32_REG(BPF_DIV, BPF_REG_2, BPF_REG_1),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 8,
},
{
	"DIV32 by 0, zero check 1, cls",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_0, 1),
	BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 0,
},
{
	"DIV32 by 0, zero check 2, cls",
	.insns = {
	BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL),
	BPF_MOV32_IMM(BPF_REG_0, 1),
	BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 0,
},
{
	"DIV64 by 0, zero check, cls",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_0, 1),
	BPF_ALU64_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 0,
},
{
	"MOD32 by 0, zero check ok, cls",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, 42),
	BPF_MOV32_IMM(BPF_REG_1, 3),
	BPF_MOV32_IMM(BPF_REG_2, 5),
	BPF_ALU32_REG(BPF_MOD, BPF_REG_2, BPF_REG_1),
	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 2,
},
{
	"MOD32 by 0, zero check 1, cls",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_0, 1),
	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 1,
},
{
	"MOD32 by 0, zero check 2, cls",
	.insns = {
	BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL),
	BPF_MOV32_IMM(BPF_REG_0, 1),
	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 1,
},
{
	"MOD64 by 0, zero check 1, cls",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_0, 2),
	BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 2,
},
{
	"MOD64 by 0, zero check 2, cls",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_1, 0),
	BPF_MOV32_IMM(BPF_REG_0, -1),
	BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = -1,
},
@@ -0,0 +1,104 @@
/* Just make sure that JITs use udiv/umod; otherwise we would get an
 * exception from the INT_MIN/-1 overflow, similarly as with div by
 * zero.
 */
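/* BPF division is unsigned, so INT_MIN / -1 is 0x80000000 / 0xffffffff
 * at runtime, which is 0, and the corresponding MOD keeps the
 * dividend; the retvals below encode exactly that.
 */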
{
	"DIV32 overflow, check 1",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_1, -1),
	BPF_MOV32_IMM(BPF_REG_0, INT_MIN),
	BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 0,
},
{
	"DIV32 overflow, check 2",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, INT_MIN),
	BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, -1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 0,
},
{
	"DIV64 overflow, check 1",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_1, -1),
	BPF_LD_IMM64(BPF_REG_0, LLONG_MIN),
	BPF_ALU64_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 0,
},
{
	"DIV64 overflow, check 2",
	.insns = {
	BPF_LD_IMM64(BPF_REG_0, LLONG_MIN),
	BPF_ALU64_IMM(BPF_DIV, BPF_REG_0, -1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 0,
},
{
	"MOD32 overflow, check 1",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_1, -1),
	BPF_MOV32_IMM(BPF_REG_0, INT_MIN),
	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = INT_MIN,
},
{
	"MOD32 overflow, check 2",
	.insns = {
	BPF_MOV32_IMM(BPF_REG_0, INT_MIN),
	BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, -1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = INT_MIN,
},
{
	"MOD64 overflow, check 1",
	.insns = {
	BPF_MOV64_IMM(BPF_REG_1, -1),
	BPF_LD_IMM64(BPF_REG_2, LLONG_MIN),
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
	BPF_ALU64_REG(BPF_MOD, BPF_REG_2, BPF_REG_1),
	BPF_MOV32_IMM(BPF_REG_0, 0),
	BPF_JMP_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1),
	BPF_MOV32_IMM(BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 1,
},
{
	"MOD64 overflow, check 2",
	.insns = {
	BPF_LD_IMM64(BPF_REG_2, LLONG_MIN),
	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
	BPF_ALU64_IMM(BPF_MOD, BPF_REG_2, -1),
	BPF_MOV32_IMM(BPF_REG_0, 0),
	BPF_JMP_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1),
	BPF_MOV32_IMM(BPF_REG_0, 1),
	BPF_EXIT_INSN(),
	},
	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	.result = ACCEPT,
	.retval = 1,
},