bpf-next-for-netdev

-----BEGIN PGP SIGNATURE-----

iHUEABYIAB0WIQTFp0I1jqZrAX+hPRXbK58LschIgwUCZAZsBwAKCRDbK58LschI
g3W1AQCQnO6pqqX5Q2aYDAZPlZRtV2TRLjuqrQE0dHW/XLAbBgD/bgsAmiKhPSCG
2mTt6izpTQVlZB0e8KcDIvbYd9CE3Qc=
=EjJQ
-----END PGP SIGNATURE-----

Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2023-03-06

We've added 85 non-merge commits during the last 13 day(s) which contain
a total of 131 files changed, 7102 insertions(+), 1792 deletions(-).

The main changes are:

1) Add skb and XDP typed dynptrs, which allow BPF programs more ergonomic
   and less brittle iteration through data and variable-sized accesses,
   from Joanne Koong.

2) Bigger batch of BPF verifier improvements to prepare for upcoming BPF
   open-coded iterators allowing for less restrictive looping capabilities,
   from Andrii Nakryiko.

3) Rework RCU enforcement in the verifier: add kptr_rcu and require BPF
   programs to NULL-check such pointers before passing them into kfuncs,
   from Alexei Starovoitov.

4) Add support for kptrs in percpu hashmaps, percpu LRU hashmaps and in
   local storage maps, from Kumar Kartikeya Dwivedi.

5) Add BPF verifier support for ST instructions in convert_ctx_access(),
   which will help the new -mcpu=v4 clang flag to start emitting them,
   from Eduard Zingerman.

6) Make uprobe attachment Android APK aware by supporting attachment to
   functions inside ELF objects contained in APKs via function names,
   from Daniel Müller.

7) Add a new BPF_F_TIMER_ABS flag for the bpf_timer_start() helper to start
   the timer with an absolute expiration value instead of a relative one,
   from Tero Kristo.

8) Add a new kfunc bpf_cgroup_from_id() to look up cgroups via id,
   from Tejun Heo.

9) Extend libbpf to support users manually attaching kprobes/uprobes in
   the legacy/perf/link mode, from Menglong Dong.

10) Implement workarounds in the mips BPF JIT for DADDI/R4000, from
    Jiaxun Yang.

11) Enable mixing bpf2bpf and tailcalls for the loongarch BPF JIT, from
    Hengqi Chen.

12) Extend the BPF instruction set doc to describe how the bytes of an
    encoded BPF instruction are stored under big and little endian, from
    Jose E. Marchesi.

13) Follow-up to enable kfunc support for the riscv BPF JIT, from Pu Lehui.

14) Fix bpf_xdp_query() backwards compatibility on old kernels, from
    Yonghong Song.

15) Fix BPF selftest cross compilation with CLANG_CROSS_FLAGS, from
    Florent Revest.

16) Improve bpf_cpumask_ma to only allocate one bpf_mem_cache, from Hou Tao.

17) Fix BPF verifier's check_subprogs to not unnecessarily mark a
    subprogram with has_tail_call, from Ilya Leoshkevich.

18) Fix arm syscall regs spec in libbpf's bpf_tracing.h, from Puranjay Mohan.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (85 commits)
  selftests/bpf: Add test for legacy/perf kprobe/uprobe attach mode
  selftests/bpf: Split test_attach_probe into multi subtests
  libbpf: Add support to set kprobe/uprobe attach mode
  tools/resolve_btfids: Add /libsubcmd to .gitignore
  bpf: add support for fixed-size memory pointer returns for kfuncs
  bpf: generalize dynptr_get_spi to be usable for iters
  bpf: mark PTR_TO_MEM as non-null register type
  bpf: move kfunc_call_arg_meta higher in the file
  bpf: ensure that r0 is marked scratched after any function call
  bpf: fix visit_insn()'s detection of BPF_FUNC_timer_set_callback helper
  bpf: clean up visit_insn()'s instruction processing
  selftests/bpf: adjust log_fixup's buffer size for proper truncation
  bpf: honor env->test_state_freq flag in is_state_visited()
  selftests/bpf: enhance align selftest's expected log matching
  bpf: improve regsafe() checks for PTR_TO_{MEM,BUF,TP_BUFFER}
  bpf: improve stack slot state printing
  selftests/bpf: Disassembler tests for verifier.c:convert_ctx_access()
  selftests/bpf: test if pointer type is tracked for BPF_ST_MEM
  bpf: allow ctx writes using BPF_ST_MEM instruction
  bpf: Use separate RCU callbacks for freeing selem
  ...
====================

Link: https://lore.kernel.org/r/20230307004346.27578-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
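To give a feel for the skb dynptrs from item 1), here is a minimal, hedged sketch of a TC program using them. The kfunc declarations are local approximations of the signatures introduced in this series (they are not copied from a UAPI header), and the constants are defined locally::

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  #define ETH_HLEN     14
  #define TC_ACT_OK     0
  #define IPPROTO_TCP   6

  /* Declarations paraphrased from this series' kfuncs. */
  int bpf_dynptr_from_skb(struct __sk_buff *skb, u64 flags,
                          struct bpf_dynptr *ptr) __ksym;
  void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, u32 offset,
                         void *buffer, u32 buffer__szk) __ksym;

  SEC("tc")
  int count_tcp(struct __sk_buff *skb)
  {
          struct bpf_dynptr ptr;
          struct iphdr buf, *iph;

          if (bpf_dynptr_from_skb(skb, 0, &ptr))
                  return TC_ACT_OK;
          /* Returns a pointer into the packet, or into 'buf' when the
           * requested bytes are not linear; NULL if out of bounds. */
          iph = bpf_dynptr_slice(&ptr, ETH_HLEN, &buf, sizeof(buf));
          if (iph && iph->protocol == IPPROTO_TCP)
                  bpf_printk("tcp packet seen");
          return TC_ACT_OK;
  }

  char LICENSE[] SEC("license") = "GPL";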
@@ -314,7 +314,7 @@ Q: What is the compatibility story for special BPF types in map values?
 Q: Users are allowed to embed bpf_spin_lock, bpf_timer fields in their BPF map
 values (when using BTF support for BPF maps). This allows to use helpers for
 such objects on these fields inside map values. Users are also allowed to embed
-pointers to some kernel types (with __kptr and __kptr_ref BTF tags). Will the
+pointers to some kernel types (with __kptr_untrusted and __kptr BTF tags). Will the
 kernel preserve backwards compatibility for these features?

 A: It depends. For bpf_spin_lock, bpf_timer: YES, for kptr and everything else:
@@ -324,7 +324,7 @@ For struct types that have been added already, like bpf_spin_lock and bpf_timer,
 the kernel will preserve backwards compatibility, as they are part of UAPI.

 For kptrs, they are also part of UAPI, but only with respect to the kptr
-mechanism. The types that you can use with a __kptr and __kptr_ref tagged
+mechanism. The types that you can use with a __kptr_untrusted and __kptr tagged
 pointer in your struct are NOT part of the UAPI contract. The supported types can
 and will change across kernel releases. However, operations like accessing kptr
 fields and bpf_kptr_xchg() helper will continue to be supported across kernel

@@ -128,7 +128,7 @@ into the bpf-next tree will make their way into net-next tree. net and
 net-next are both run by David S. Miller. From there, they will go
 into the kernel mainline tree run by Linus Torvalds. To read up on the
 process of net and net-next being merged into the mainline tree, see
-the :ref:`netdev-FAQ`
+the `netdev-FAQ`_.



@@ -147,7 +147,7 @@ request)::
 Q: How do I indicate which tree (bpf vs. bpf-next) my patch should be applied to?
 ---------------------------------------------------------------------------------

-A: The process is the very same as described in the :ref:`netdev-FAQ`,
+A: The process is the very same as described in the `netdev-FAQ`_,
 so please read up on it. The subject line must indicate whether the
 patch is a fix or rather "next-like" content in order to let the
 maintainers know whether it is targeted at bpf or bpf-next.
@@ -206,7 +206,7 @@ ii) run extensive BPF test suite and
 Once the BPF pull request was accepted by David S. Miller, then
 the patches end up in net or net-next tree, respectively, and
 make their way from there further into mainline. Again, see the
-:ref:`netdev-FAQ` for additional information e.g. on how often they are
+`netdev-FAQ`_ for additional information e.g. on how often they are
 merged to mainline.

 Q: How long do I need to wait for feedback on my BPF patches?
@@ -230,7 +230,7 @@ Q: Are patches applied to bpf-next when the merge window is open?
 -----------------------------------------------------------------
 A: For the time when the merge window is open, bpf-next will not be
 processed. This is roughly analogous to net-next patch processing,
-so feel free to read up on the :ref:`netdev-FAQ` about further details.
+so feel free to read up on the `netdev-FAQ`_ about further details.

 During those two weeks of merge window, we might ask you to resend
 your patch series once bpf-next is open again. Once Linus released
@@ -394,7 +394,7 @@ netdev kernel mailing list in Cc and ask for the fix to be queued up:
 netdev@vger.kernel.org

 The process in general is the same as on netdev itself, see also the
-:ref:`netdev-FAQ`.
+`netdev-FAQ`_.

 Q: Do you also backport to kernels not currently maintained as stable?
 ----------------------------------------------------------------------
@@ -410,7 +410,7 @@ Q: The BPF patch I am about to submit needs to go to stable as well
 What should I do?

 A: The same rules apply as with netdev patch submissions in general, see
-the :ref:`netdev-FAQ`.
+the `netdev-FAQ`_.

 Never add "``Cc: stable@vger.kernel.org``" to the patch description, but
 ask the BPF maintainers to queue the patches instead. This can be done
@@ -685,7 +685,7 @@ when:

 .. Links
 .. _Documentation/process/: https://www.kernel.org/doc/html/latest/process/
-.. _netdev-FAQ: Documentation/process/maintainer-netdev.rst
+.. _netdev-FAQ: https://www.kernel.org/doc/html/latest/process/maintainer-netdev.html
 .. _selftests:
    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/bpf/
 .. _Documentation/dev-tools/kselftest.rst:

@@ -51,7 +51,7 @@ For example:
 .. code-block:: c

         struct cpumask_map_value {
-                struct bpf_cpumask __kptr_ref * cpumask;
+                struct bpf_cpumask __kptr * cpumask;
         };

         struct array_map {
@@ -128,7 +128,7 @@ Here is an example of a ``struct bpf_cpumask *`` being retrieved from a map:

         /* struct containing the struct bpf_cpumask kptr which is stored in the map. */
         struct cpumasks_kfunc_map_value {
-                struct bpf_cpumask __kptr_ref * bpf_cpumask;
+                struct bpf_cpumask __kptr * bpf_cpumask;
         };

         /* The map containing struct cpumasks_kfunc_map_value entries. */

@@ -38,14 +38,11 @@ eBPF has two instruction encodings:
 * the wide instruction encoding, which appends a second 64-bit immediate (i.e.,
   constant) value after the basic instruction for a total of 128 bits.

-The basic instruction encoding is as follows, where MSB and LSB mean the most significant
-bits and least significant bits, respectively:
+The fields conforming an encoded basic instruction are stored in the
+following order::

-=============  =======  =======  =======  ============
-32 bits (MSB)  16 bits  4 bits   4 bits   8 bits (LSB)
-=============  =======  =======  =======  ============
-imm            offset   src_reg  dst_reg  opcode
-=============  =======  =======  =======  ============
+  opcode:8 src_reg:4 dst_reg:4 offset:16 imm:32 // In little-endian BPF.
+  opcode:8 dst_reg:4 src_reg:4 offset:16 imm:32 // In big-endian BPF.

 **imm**
   signed integer immediate value
@@ -63,6 +60,18 @@ imm offset src_reg dst_reg opcode
 **opcode**
   operation to perform

+Note that the contents of multi-byte fields ('imm' and 'offset') are
+stored using big-endian byte ordering in big-endian BPF and
+little-endian byte ordering in little-endian BPF.
+
+For example::
+
+  opcode                  offset imm          assembly
+         src_reg dst_reg
+  07     0       1        00 00  44 33 22 11  r1 += 0x11223344 // little
+         dst_reg src_reg
+  07     1       0        00 00  11 22 33 44  r1 += 0x11223344 // big
+
 Note that most instructions do not use all of the fields.
 Unused fields shall be cleared to zero.

@@ -72,18 +81,23 @@ The 64 bits following the basic instruction contain a pseudo instruction
 using the same format but with opcode, dst_reg, src_reg, and offset all set to zero,
 and imm containing the high 32 bits of the immediate value.

-=================  ==================
-64 bits (MSB)      64 bits (LSB)
-=================  ==================
-basic instruction  pseudo instruction
-=================  ==================
+This is depicted in the following figure::
+
+        basic_instruction
+  .-----------------------------.
+  |                             |
+  code:8 regs:8 offset:16 imm:32 unused:32 imm:32
+                                 |              |
+                                 '--------------'
+                                pseudo instruction

 Thus the 64-bit immediate value is constructed as follows:

   imm64 = (next_imm << 32) | imm

 where 'next_imm' refers to the imm value of the pseudo instruction
-following the basic instruction.
+following the basic instruction. The unused bytes in the pseudo
+instruction are reserved and shall be cleared to zero.

 Instruction classes
 -------------------

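As a worked illustration of the imm64 rule above (imm64 = (next_imm << 32) | imm), here is a small decoder sketch in C. The struct layout mirrors the little-endian field order shown in this section; the nibble order of the two register fields differs by endianness as described above::

  #include <stdint.h>

  struct insn {              /* little-endian BPF field order */
          uint8_t opcode;
          uint8_t regs;      /* two 4-bit register fields, order per endianness */
          int16_t offset;
          int32_t imm;
  };

  /* Reassemble the 64-bit immediate from a basic instruction and the
   * pseudo instruction that follows it (used by the wide encoding). */
  static uint64_t imm64(const struct insn i[2])
  {
          return ((uint64_t)(uint32_t)i[1].imm << 32) | (uint32_t)i[0].imm;
  }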
@@ -100,6 +100,23 @@ Hence, whenever a constant scalar argument is accepted by a kfunc which is not a
 size parameter, and the value of the constant matters for program safety, __k
 suffix should be used.

+2.2.2 __uninit Annotation
+-------------------------
+
+This annotation is used to indicate that the argument will be treated as
+uninitialized.
+
+An example is given below::
+
+        __bpf_kfunc int bpf_dynptr_from_skb(..., struct bpf_dynptr_kern *ptr__uninit)
+        {
+        ...
+        }
+
+Here, the dynptr will be treated as an uninitialized dynptr. Without this
+annotation, the verifier will reject the program if the dynptr passed in is
+not initialized.
+
 .. _BPF_kfunc_nodef:

 2.3 Using an existing kernel function
@@ -232,11 +249,13 @@ added later.
 2.4.8 KF_RCU flag
 -----------------

-The KF_RCU flag is used for kfuncs which have a rcu ptr as its argument.
-When used together with KF_ACQUIRE, it indicates the kfunc should have a
-single argument which must be a trusted argument or a MEM_RCU pointer.
-The argument may have reference count of 0 and the kfunc must take this
-into consideration.
+The KF_RCU flag is a weaker version of KF_TRUSTED_ARGS. The kfuncs marked with
+KF_RCU expect either PTR_TRUSTED or MEM_RCU arguments. The verifier guarantees
+that the objects are valid and there is no use-after-free. The pointers are not
+NULL, but the object's refcount could have reached zero. The kfuncs need to
+consider doing refcnt != 0 check, especially when returning a KF_ACQUIRE
+pointer. Note as well that a KF_ACQUIRE kfunc that is KF_RCU should very likely
+also be KF_RET_NULL.

 .. _KF_deprecated_flag:

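To make the KF_RCU contract concrete, a hedged sketch of an acquire kfunc that tolerates a zero refcount follows. The function name is illustrative only (not an existing kfunc); task_struct does provide an rcu_users refcount::

  __bpf_kfunc struct task_struct *bpf_task_acquire_rcu(struct task_struct *p)
  {
          /* KF_RCU only guarantees the object is not freed; its refcount
           * may already be zero, so a conditional get is required. */
          if (!refcount_inc_not_zero(&p->rcu_users))
                  return NULL;    /* hence KF_ACQUIRE | KF_RCU | KF_RET_NULL */
          return p;
  }

  BTF_ID_FLAGS(func, bpf_task_acquire_rcu, KF_ACQUIRE | KF_RCU | KF_RET_NULL)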
@@ -527,7 +546,7 @@ Here's an example of how it can be used:

         /* struct containing the struct task_struct kptr which is actually stored in the map. */
         struct __cgroups_kfunc_map_value {
-                struct cgroup __kptr_ref * cgroup;
+                struct cgroup __kptr * cgroup;
         };

         /* The map containing struct __cgroups_kfunc_map_value entries. */
@@ -583,13 +602,17 @@ Here's an example of how it can be used:

 ----

-Another kfunc available for interacting with ``struct cgroup *`` objects is
-bpf_cgroup_ancestor(). This allows callers to access the ancestor of a cgroup,
-and return it as a cgroup kptr.
+Other kfuncs available for interacting with ``struct cgroup *`` objects are
+bpf_cgroup_ancestor() and bpf_cgroup_from_id(), allowing callers to access
+the ancestor of a cgroup and find a cgroup by its ID, respectively. Both
+return a cgroup kptr.

 .. kernel-doc:: kernel/bpf/helpers.c
    :identifiers: bpf_cgroup_ancestor

+.. kernel-doc:: kernel/bpf/helpers.c
+   :identifiers: bpf_cgroup_from_id
+
 Eventually, BPF should be updated to allow this to happen with a normal memory
 load in the program itself. This is currently not possible without more work in
 the verifier. bpf_cgroup_ancestor() can be used as follows:

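A short usage sketch for the new bpf_cgroup_from_id() from a BPF program, assuming the usual vmlinux.h/bpf_helpers.h/bpf_tracing.h includes; the cgroup ID here is an assumed user-supplied constant and the kfunc declarations are paraphrased::

  struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
  void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

  const volatile u64 target_cgid;   /* set by user space before load */

  SEC("tp_btf/task_newtask")
  int BPF_PROG(on_newtask, struct task_struct *task, u64 clone_flags)
  {
          struct cgroup *cgrp;

          cgrp = bpf_cgroup_from_id(target_cgid);  /* acquires a reference */
          if (!cgrp)
                  return 0;
          bpf_printk("cgroup %llu level %d", target_cgid, cgrp->level);
          bpf_cgroup_release(cgrp);                /* must release the kptr */
          return 0;
  }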
@@ -11,9 +11,9 @@ maps are accessed from BPF programs via BPF helpers which are documented in the
 `man-pages`_ for `bpf-helpers(7)`_.

 BPF maps are accessed from user space via the ``bpf`` syscall, which provides
-commands to create maps, lookup elements, update elements and delete
-elements. More details of the BPF syscall are available in
-:doc:`/userspace-api/ebpf/syscall` and in the `man-pages`_ for `bpf(2)`_.
+commands to create maps, lookup elements, update elements and delete elements.
+More details of the BPF syscall are available in `ebpf-syscall`_ and in the
+`man-pages`_ for `bpf(2)`_.

 Map Types
 =========
@@ -79,3 +79,4 @@ Find and delete element by key in a given map using ``attr->map_fd``,
 .. _man-pages: https://www.kernel.org/doc/man-pages/
 .. _bpf(2): https://man7.org/linux/man-pages/man2/bpf.2.html
 .. _bpf-helpers(7): https://man7.org/linux/man-pages/man7/bpf-helpers.7.html
+.. _ebpf-syscall: https://docs.kernel.org/userspace-api/ebpf/syscall.html

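The `bpf(2)` syscall commands mentioned in that doc can be exercised directly, without libbpf; a minimal user-space sketch::

  #include <linux/bpf.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static long bpf(enum bpf_cmd cmd, union bpf_attr *attr)
  {
          return syscall(__NR_bpf, cmd, attr, sizeof(*attr));
  }

  int create_and_update(void)
  {
          union bpf_attr attr;
          __u32 key = 1;
          __u64 value = 42;
          long fd;

          /* BPF_MAP_CREATE: create a small hash map. */
          memset(&attr, 0, sizeof(attr));
          attr.map_type = BPF_MAP_TYPE_HASH;
          attr.key_size = sizeof(key);
          attr.value_size = sizeof(value);
          attr.max_entries = 16;
          fd = bpf(BPF_MAP_CREATE, &attr);
          if (fd < 0)
                  return -1;

          /* BPF_MAP_UPDATE_ELEM: insert key -> value. */
          memset(&attr, 0, sizeof(attr));
          attr.map_fd = fd;
          attr.key = (__u64)(unsigned long)&key;
          attr.value = (__u64)(unsigned long)&value;
          attr.flags = BPF_ANY;
          return bpf(BPF_MAP_UPDATE_ELEM, &attr);
  }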
@@ -1248,3 +1248,9 @@ out:

	return prog;
 }
+
+/* Indicate the JIT backend supports mixing bpf2bpf and tailcalls. */
+bool bpf_jit_supports_subprog_tailcalls(void)
+{
+	return true;
+}

@@ -63,10 +63,7 @@ config MIPS
	select HAVE_DEBUG_STACKOVERFLOW
	select HAVE_DMA_CONTIGUOUS
	select HAVE_DYNAMIC_FTRACE
-	select HAVE_EBPF_JIT if !CPU_MICROMIPS && \
-				!CPU_DADDI_WORKAROUNDS && \
-				!CPU_R4000_WORKAROUNDS && \
-				!CPU_R4400_WORKAROUNDS
+	select HAVE_EBPF_JIT if !CPU_MICROMIPS
	select HAVE_EXIT_THREAD
	select HAVE_FAST_GUP
	select HAVE_FTRACE_MCOUNT_RECORD

@@ -218,9 +218,13 @@ bool valid_alu_i(u8 op, s32 imm)
		/* All legal eBPF values are valid */
		return true;
	case BPF_ADD:
+		if (IS_ENABLED(CONFIG_CPU_DADDI_WORKAROUNDS))
+			return false;
		/* imm must be 16 bits */
		return imm >= -0x8000 && imm <= 0x7fff;
	case BPF_SUB:
+		if (IS_ENABLED(CONFIG_CPU_DADDI_WORKAROUNDS))
+			return false;
		/* -imm must be 16 bits */
		return imm >= -0x7fff && imm <= 0x8000;
	case BPF_AND:

@@ -228,6 +228,9 @@ static void emit_alu_r64(struct jit_context *ctx, u8 dst, u8 src, u8 op)
		} else {
			emit(ctx, dmultu, dst, src);
			emit(ctx, mflo, dst);
+			/* Ensure multiplication is completed */
+			if (IS_ENABLED(CONFIG_CPU_R4000_WORKAROUNDS))
+				emit(ctx, mfhi, MIPS_R_ZERO);
		}
		break;
	/* dst = dst / src */

@@ -1751,3 +1751,8 @@ void bpf_jit_build_epilogue(struct rv_jit_context *ctx)
 {
	__build_epilogue(false, ctx);
 }
+
+bool bpf_jit_supports_kfunc_call(void)
+{
+	return true;
+}

@@ -607,11 +607,18 @@ enum bpf_type_flag {
	 */
	NON_OWN_REF = BIT(14 + BPF_BASE_TYPE_BITS),

+	/* DYNPTR points to sk_buff */
+	DYNPTR_TYPE_SKB = BIT(15 + BPF_BASE_TYPE_BITS),
+
+	/* DYNPTR points to xdp_buff */
+	DYNPTR_TYPE_XDP = BIT(16 + BPF_BASE_TYPE_BITS),
+
	__BPF_TYPE_FLAG_MAX,
	__BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1,
 };

-#define DYNPTR_TYPE_FLAG_MASK (DYNPTR_TYPE_LOCAL | DYNPTR_TYPE_RINGBUF)
+#define DYNPTR_TYPE_FLAG_MASK (DYNPTR_TYPE_LOCAL | DYNPTR_TYPE_RINGBUF | DYNPTR_TYPE_SKB \
+			       | DYNPTR_TYPE_XDP)

 /* Max number of base types. */
 #define BPF_BASE_TYPE_LIMIT (1UL << BPF_BASE_TYPE_BITS)
@@ -1124,6 +1131,37 @@ static __always_inline __nocfi unsigned int bpf_dispatcher_nop_func(
	return bpf_func(ctx, insnsi);
 }

+/* the implementation of the opaque uapi struct bpf_dynptr */
+struct bpf_dynptr_kern {
+	void *data;
+	/* Size represents the number of usable bytes of dynptr data.
+	 * If for example the offset is at 4 for a local dynptr whose data is
+	 * of type u64, the number of usable bytes is 4.
+	 *
+	 * The upper 8 bits are reserved. It is as follows:
+	 * Bits 0 - 23 = size
+	 * Bits 24 - 30 = dynptr type
+	 * Bit 31 = whether dynptr is read-only
+	 */
+	u32 size;
+	u32 offset;
+} __aligned(8);
+
+enum bpf_dynptr_type {
+	BPF_DYNPTR_TYPE_INVALID,
+	/* Points to memory that is local to the bpf program */
+	BPF_DYNPTR_TYPE_LOCAL,
+	/* Underlying data is a ringbuf record */
+	BPF_DYNPTR_TYPE_RINGBUF,
+	/* Underlying data is a sk_buff */
+	BPF_DYNPTR_TYPE_SKB,
+	/* Underlying data is a xdp_buff */
+	BPF_DYNPTR_TYPE_XDP,
+};
+
+int bpf_dynptr_check_size(u32 size);
+u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr);
+
 #ifdef CONFIG_BPF_JIT
 int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
 int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
@@ -2241,7 +2279,7 @@ struct bpf_core_ctx {

 bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
				const struct bpf_reg_state *reg,
-				int off);
+				int off, const char *suffix);

 bool btf_type_ids_nocast_alias(struct bpf_verifier_log *log,
			       const struct btf *reg_btf, u32 reg_id,
@@ -2266,6 +2304,11 @@ static inline bool has_current_bpf_ctx(void)
 }

 void notrace bpf_prog_inc_misses_counter(struct bpf_prog *prog);
+
+void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
+		     enum bpf_dynptr_type type, u32 offset, u32 size);
+void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
+void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr);
 #else /* !CONFIG_BPF_SYSCALL */
 static inline struct bpf_prog *bpf_prog_get(u32 ufd)
 {
@@ -2495,6 +2538,19 @@ static inline void bpf_prog_inc_misses_counter(struct bpf_prog *prog)
 static inline void bpf_cgrp_storage_free(struct cgroup *cgroup)
 {
 }
+
+static inline void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
+				   enum bpf_dynptr_type type, u32 offset, u32 size)
+{
+}
+
+static inline void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr)
+{
+}
+
+static inline void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr)
+{
+}
 #endif /* CONFIG_BPF_SYSCALL */

 void __bpf_free_used_btfs(struct bpf_prog_aux *aux,
@@ -2801,6 +2857,8 @@ u32 bpf_sock_convert_ctx_access(enum bpf_access_type type,
				struct bpf_insn *insn_buf,
				struct bpf_prog *prog,
				u32 *target_size);
+int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
+			       struct bpf_dynptr_kern *ptr);
 #else
 static inline bool bpf_sock_common_is_valid_access(int off, int size,
						   enum bpf_access_type type,
@@ -2822,6 +2880,11 @@ static inline u32 bpf_sock_convert_ctx_access(enum bpf_access_type type,
 {
	return 0;
 }
+static inline int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
+					     struct bpf_dynptr_kern *ptr)
+{
+	return -EOPNOTSUPP;
+}
 #endif

 #ifdef CONFIG_INET
@@ -2913,36 +2976,6 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
			u32 num_args, struct bpf_bprintf_data *data);
 void bpf_bprintf_cleanup(struct bpf_bprintf_data *data);

-/* the implementation of the opaque uapi struct bpf_dynptr */
-struct bpf_dynptr_kern {
-	void *data;
-	/* Size represents the number of usable bytes of dynptr data.
-	 * If for example the offset is at 4 for a local dynptr whose data is
-	 * of type u64, the number of usable bytes is 4.
-	 *
-	 * The upper 8 bits are reserved. It is as follows:
-	 * Bits 0 - 23 = size
-	 * Bits 24 - 30 = dynptr type
-	 * Bit 31 = whether dynptr is read-only
-	 */
-	u32 size;
-	u32 offset;
-} __aligned(8);
-
-enum bpf_dynptr_type {
-	BPF_DYNPTR_TYPE_INVALID,
-	/* Points to memory that is local to the bpf program */
-	BPF_DYNPTR_TYPE_LOCAL,
-	/* Underlying data is a kernel-produced ringbuf record */
-	BPF_DYNPTR_TYPE_RINGBUF,
-};
-
-void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
-		     enum bpf_dynptr_type type, u32 offset, u32 size);
-void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
-int bpf_dynptr_check_size(u32 size);
-u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr);
-
 #ifdef CONFIG_BPF_LSM
 void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype);
 void bpf_cgroup_atype_put(int cgroup_atype);

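The size/type/read-only packing documented in the bpf_dynptr_kern comment above can be illustrated with self-contained masks derived purely from that comment; the kernel's actual internal macros live in kernel/bpf/helpers.c and may differ in detail::

  #include <stdint.h>

  #define DEMO_SIZE_MASK   0x00FFFFFFu   /* bits 0-23: usable size  */
  #define DEMO_TYPE_MASK   0x7F000000u   /* bits 24-30: dynptr type */
  #define DEMO_RDONLY_BIT  0x80000000u   /* bit 31: read-only flag  */

  static inline uint32_t demo_size(uint32_t s)   { return s & DEMO_SIZE_MASK; }
  static inline uint32_t demo_type(uint32_t s)   { return (s & DEMO_TYPE_MASK) >> 24; }
  static inline int demo_rdonly(uint32_t s)      { return !!(s & DEMO_RDONLY_BIT); }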
@@ -14,6 +14,13 @@ struct bpf_mem_alloc {
	struct work_struct work;
 };

+/* 'size != 0' is for bpf_mem_alloc which manages fixed-size objects.
+ * Alloc and free are done with bpf_mem_cache_{alloc,free}().
+ *
+ * 'size = 0' is for bpf_mem_alloc which manages many fixed-size objects.
+ * Alloc and free are done with bpf_mem_{alloc,free}() and the size of
+ * the returned object is given by the size argument of bpf_mem_alloc().
+ */
 int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu);
 void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma);

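The two modes described in the new comment, side by side, as a hedged sketch of kernel-internal usage (error handling abbreviated; the fixed-size case matches what kernel/bpf/cpumask.c switches to later in this series)::

  struct bpf_mem_alloc fixed_ma, any_ma;
  void *obj, *p;

  /* size != 0: one cache of fixed-size objects, paired with
   * bpf_mem_cache_alloc()/bpf_mem_cache_free(). */
  bpf_mem_alloc_init(&fixed_ma, sizeof(struct bpf_cpumask), false);
  obj = bpf_mem_cache_alloc(&fixed_ma);
  if (obj)
          bpf_mem_cache_free(&fixed_ma, obj);

  /* size == 0: many object sizes, paired with bpf_mem_alloc()/
   * bpf_mem_free(); the size is passed at allocation time. */
  bpf_mem_alloc_init(&any_ma, 0, false);
  p = bpf_mem_alloc(&any_ma, 64);
  if (p)
          bpf_mem_free(&any_ma, p);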
@@ -537,7 +537,6 @@ struct bpf_verifier_env {
	bool bypass_spec_v1;
	bool bypass_spec_v4;
	bool seen_direct_write;
-	bool rcu_tag_supported;
	struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
	const struct bpf_line_info *prev_linfo;
	struct bpf_verifier_log log;
@@ -616,9 +615,6 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env,
			   enum bpf_arg_type arg_type);
 int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
		  u32 regno, u32 mem_size);
-struct bpf_call_arg_meta;
-int process_dynptr_func(struct bpf_verifier_env *env, int regno,
-			enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta);

 /* this lives here instead of in bpf.h because it needs to dereference tgt_prog */
 static inline u64 bpf_trampoline_compute_key(const struct bpf_prog *tgt_prog,

@@ -70,7 +70,7 @@
 #define KF_TRUSTED_ARGS (1 << 4) /* kfunc only takes trusted pointer arguments */
 #define KF_SLEEPABLE    (1 << 5) /* kfunc may sleep */
 #define KF_DESTRUCTIVE  (1 << 6) /* kfunc performs destructive actions */
-#define KF_RCU          (1 << 7) /* kfunc only takes rcu pointer arguments */
+#define KF_RCU          (1 << 7) /* kfunc takes either rcu or trusted pointer arguments */

 /*
  * Tag marking a kernel function as a kfunc. This is meant to minimize the

@@ -1542,4 +1542,50 @@ static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u64 index
	return XDP_REDIRECT;
 }

+#ifdef CONFIG_NET
+int __bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, void *to, u32 len);
+int __bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, const void *from,
+			  u32 len, u64 flags);
+int __bpf_xdp_load_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len);
+int __bpf_xdp_store_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len);
+void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len);
+void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
+		      void *buf, unsigned long len, bool flush);
+#else /* CONFIG_NET */
+static inline int __bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset,
+				       void *to, u32 len)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int __bpf_skb_store_bytes(struct sk_buff *skb, u32 offset,
+					const void *from, u32 len, u64 flags)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int __bpf_xdp_load_bytes(struct xdp_buff *xdp, u32 offset,
+				       void *buf, u32 len)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int __bpf_xdp_store_bytes(struct xdp_buff *xdp, u32 offset,
+					void *buf, u32 len)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
+{
+	return NULL;
+}
+
+static inline void *bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off, void *buf,
+				     unsigned long len, bool flush)
+{
+	return NULL;
+}
+#endif /* CONFIG_NET */
+
 #endif /* __LINUX_FILTER_H__ */

@@ -4969,6 +4969,12 @@ union bpf_attr {
 *		different maps if key/value layout matches across maps.
 *		Every bpf_timer_set_callback() can have different callback_fn.
 *
+*		*flags* can be one of:
+*
+*		**BPF_F_TIMER_ABS**
+*			Start the timer in absolute expire value instead of the
+*			default relative one.
+*
 *	Return
 *		0 on success.
 *		**-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier
@@ -5325,11 +5331,22 @@ union bpf_attr {
 *	Description
 *		Write *len* bytes from *src* into *dst*, starting from *offset*
 *		into *dst*.
-*		*flags* is currently unused.
+*
+*		*flags* must be 0 except for skb-type dynptrs.
+*
+*		For skb-type dynptrs:
+*		    *  All data slices of the dynptr are automatically
+*		       invalidated after **bpf_dynptr_write**\ (). This is
+*		       because writing may pull the skb and change the
+*		       underlying packet buffer.
+*
+*		    *  For *flags*, please see the flags accepted by
+*		       **bpf_skb_store_bytes**\ ().
 *	Return
 *		0 on success, -E2BIG if *offset* + *len* exceeds the length
 *		of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
-*		is a read-only dynptr or if *flags* is not 0.
+*		is a read-only dynptr or if *flags* is not correct. For skb-type dynptrs,
+*		other errors correspond to errors returned by **bpf_skb_store_bytes**\ ().
 *
 * void *bpf_dynptr_data(const struct bpf_dynptr *ptr, u32 offset, u32 len)
 *	Description
@@ -5337,6 +5354,9 @@ union bpf_attr {
 *
 *		*len* must be a statically known value. The returned data slice
 *		is invalidated whenever the dynptr is invalidated.
+*
+*		skb and xdp type dynptrs may not use bpf_dynptr_data. They should
+*		instead use bpf_dynptr_slice and bpf_dynptr_slice_rdwr.
 *	Return
 *		Pointer to the underlying dynptr data, NULL if the dynptr is
 *		read-only, if the dynptr is invalid, or if the offset and length
@@ -7083,4 +7103,13 @@ struct bpf_core_relo {
	enum bpf_core_relo_kind kind;
 };

+/*
+ * Flags to control bpf_timer_start() behaviour.
+ *     - BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
+ *       relative to current time.
+ */
+enum {
+	BPF_F_TIMER_ABS = (1ULL << 0),
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */

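How the new flag changes bpf_timer_start() from a BPF program's point of view; a hedged sketch (map layout and attach point are illustrative, usual vmlinux.h/libbpf includes assumed)::

  struct elem { struct bpf_timer t; };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, struct elem);
  } timer_map SEC(".maps");

  static int timer_cb(void *map, int *key, struct bpf_timer *timer)
  {
          return 0;
  }

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(start_abs)
  {
          int key = 0;
          struct elem *val = bpf_map_lookup_elem(&timer_map, &key);

          if (!val)
                  return 0;
          bpf_timer_init(&val->t, &timer_map, CLOCK_MONOTONIC);
          bpf_timer_set_callback(&val->t, timer_cb);
          /* Expire at an absolute CLOCK_MONOTONIC timestamp (1s from now)
           * rather than after a relative delay. */
          bpf_timer_start(&val->t, bpf_ktime_get_ns() + 1000000000ULL,
                          BPF_F_TIMER_ABS);
          return 0;
  }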
@@ -51,11 +51,21 @@ owner_storage(struct bpf_local_storage_map *smap, void *owner)
	return map->ops->map_owner_storage_ptr(owner);
 }

+static bool selem_linked_to_storage_lockless(const struct bpf_local_storage_elem *selem)
+{
+	return !hlist_unhashed_lockless(&selem->snode);
+}
+
 static bool selem_linked_to_storage(const struct bpf_local_storage_elem *selem)
 {
	return !hlist_unhashed(&selem->snode);
 }

+static bool selem_linked_to_map_lockless(const struct bpf_local_storage_elem *selem)
+{
+	return !hlist_unhashed_lockless(&selem->map_node);
+}
+
 static bool selem_linked_to_map(const struct bpf_local_storage_elem *selem)
 {
	return !hlist_unhashed(&selem->map_node);
@@ -75,6 +85,7 @@ bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner,
	if (selem) {
		if (value)
			copy_map_value(&smap->map, SDATA(selem)->data, value);
+		/* No need to call check_and_init_map_value as memory is zero init */
		return selem;
	}

@@ -98,7 +109,28 @@ void bpf_local_storage_free_rcu(struct rcu_head *rcu)
	kfree_rcu(local_storage, rcu);
 }

-static void bpf_selem_free_rcu(struct rcu_head *rcu)
+static void bpf_selem_free_fields_rcu(struct rcu_head *rcu)
+{
+	struct bpf_local_storage_elem *selem;
+	struct bpf_local_storage_map *smap;
+
+	selem = container_of(rcu, struct bpf_local_storage_elem, rcu);
+	/* protected by the rcu_barrier*() */
+	smap = rcu_dereference_protected(SDATA(selem)->smap, true);
+	bpf_obj_free_fields(smap->map.record, SDATA(selem)->data);
+	kfree(selem);
+}
+
+static void bpf_selem_free_fields_trace_rcu(struct rcu_head *rcu)
+{
+	/* Free directly if Tasks Trace RCU GP also implies RCU GP */
+	if (rcu_trace_implies_rcu_gp())
+		bpf_selem_free_fields_rcu(rcu);
+	else
+		call_rcu(rcu, bpf_selem_free_fields_rcu);
+}
+
+static void bpf_selem_free_trace_rcu(struct rcu_head *rcu)
 {
	struct bpf_local_storage_elem *selem;

@@ -119,6 +151,7 @@ static bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_stor
 {
	struct bpf_local_storage_map *smap;
	bool free_local_storage;
+	struct btf_record *rec;
	void *owner;

	smap = rcu_dereference_check(SDATA(selem)->smap, bpf_rcu_lock_held());
@@ -159,10 +192,26 @@ static bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_stor
						    SDATA(selem))
		RCU_INIT_POINTER(local_storage->cache[smap->cache_idx], NULL);

-	if (use_trace_rcu)
-		call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
-	else
-		kfree_rcu(selem, rcu);
+	/* A different RCU callback is chosen whenever we need to free
+	 * additional fields in selem data before freeing selem.
+	 * bpf_local_storage_map_free only executes rcu_barrier to wait for RCU
+	 * callbacks when it has special fields, hence we can only conditionally
+	 * dereference smap, as by this time the map might have already been
+	 * freed without waiting for our call_rcu callback if it did not have
+	 * any special fields.
+	 */
+	rec = smap->map.record;
+	if (use_trace_rcu) {
+		if (!IS_ERR_OR_NULL(rec))
+			call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_fields_trace_rcu);
+		else
+			call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_trace_rcu);
+	} else {
+		if (!IS_ERR_OR_NULL(rec))
+			call_rcu(&selem->rcu, bpf_selem_free_fields_rcu);
+		else
+			kfree_rcu(selem, rcu);
+	}

	return free_local_storage;
 }
@@ -174,7 +223,7 @@ static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem,
	bool free_local_storage = false;
	unsigned long flags;

-	if (unlikely(!selem_linked_to_storage(selem)))
+	if (unlikely(!selem_linked_to_storage_lockless(selem)))
		/* selem has already been unlinked from sk */
		return;

@@ -208,7 +257,7 @@ void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem)
	struct bpf_local_storage_map_bucket *b;
	unsigned long flags;

-	if (unlikely(!selem_linked_to_map(selem)))
+	if (unlikely(!selem_linked_to_map_lockless(selem)))
		/* selem has already be unlinked from smap */
		return;

@@ -420,7 +469,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
		err = check_flags(old_sdata, map_flags);
		if (err)
			return ERR_PTR(err);
-		if (old_sdata && selem_linked_to_storage(SELEM(old_sdata))) {
+		if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
			copy_map_value_locked(&smap->map, old_sdata->data,
					      value, false);
			return old_sdata;
@@ -713,6 +762,26 @@ void bpf_local_storage_map_free(struct bpf_map *map,
	 */
	synchronize_rcu();

+	/* Only delay freeing of smap, buckets are not needed anymore */
	kvfree(smap->buckets);

+	/* When local storage has special fields, callbacks for
+	 * bpf_selem_free_fields_rcu and bpf_selem_free_fields_trace_rcu will
+	 * keep using the map BTF record, we need to execute an RCU barrier to
+	 * wait for them as the record will be freed right after our map_free
+	 * callback.
+	 */
+	if (!IS_ERR_OR_NULL(smap->map.record)) {
+		rcu_barrier_tasks_trace();
+		/* We cannot skip rcu_barrier() when rcu_trace_implies_rcu_gp()
+		 * is true, because while call_rcu invocation is skipped in that
+		 * case in bpf_selem_free_fields_trace_rcu (and all local
+		 * storage maps pass use_trace_rcu = true), there can be
+		 * call_rcu callbacks based on use_trace_rcu = false in the
+		 * while ((selem = ...)) loop above or when owner's free path
+		 * calls bpf_local_storage_unlink_nolock.
+		 */
+		rcu_barrier();
+	}
	bpf_map_area_free(smap);
 }

@@ -207,6 +207,11 @@ enum btf_kfunc_hook {
	BTF_KFUNC_HOOK_TRACING,
	BTF_KFUNC_HOOK_SYSCALL,
	BTF_KFUNC_HOOK_FMODRET,
+	BTF_KFUNC_HOOK_CGROUP_SKB,
+	BTF_KFUNC_HOOK_SCHED_ACT,
+	BTF_KFUNC_HOOK_SK_SKB,
+	BTF_KFUNC_HOOK_SOCKET_FILTER,
+	BTF_KFUNC_HOOK_LWT,
	BTF_KFUNC_HOOK_MAX,
 };

@@ -3283,9 +3288,9 @@ static int btf_find_kptr(const struct btf *btf, const struct btf_type *t,
	/* Reject extra tags */
	if (btf_type_is_type_tag(btf_type_by_id(btf, t->type)))
		return -EINVAL;
-	if (!strcmp("kptr", __btf_name_by_offset(btf, t->name_off)))
+	if (!strcmp("kptr_untrusted", __btf_name_by_offset(btf, t->name_off)))
		type = BPF_KPTR_UNREF;
-	else if (!strcmp("kptr_ref", __btf_name_by_offset(btf, t->name_off)))
+	else if (!strcmp("kptr", __btf_name_by_offset(btf, t->name_off)))
		type = BPF_KPTR_REF;
	else
		return -EINVAL;
@@ -5683,6 +5688,10 @@ again:
	 * int socket_filter_bpf_prog(struct __sk_buff *skb)
	 * { // no fields of skb are ever used }
	 */
+	if (strcmp(ctx_tname, "__sk_buff") == 0 && strcmp(tname, "sk_buff") == 0)
+		return ctx_type;
+	if (strcmp(ctx_tname, "xdp_md") == 0 && strcmp(tname, "xdp_buff") == 0)
+		return ctx_type;
	if (strcmp(ctx_tname, tname)) {
		/* bpf_user_pt_regs_t is a typedef, so resolve it to
		 * underlying struct and check name again
@@ -6154,6 +6163,7 @@ static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
	const char *tname, *mname, *tag_value;
	u32 vlen, elem_id, mid;

+	*flag = 0;
 again:
	tname = __btf_name_by_offset(btf, t->name_off);
	if (!btf_type_is_struct(t)) {
@@ -6320,6 +6330,15 @@ error:
		 * of this field or inside of this struct
		 */
		if (btf_type_is_struct(mtype)) {
+			if (BTF_INFO_KIND(mtype->info) == BTF_KIND_UNION &&
+			    btf_type_vlen(mtype) != 1)
+				/*
+				 * walking unions yields untrusted pointers
+				 * with exception of __bpf_md_ptr and other
+				 * unions with a single member
+				 */
+				*flag |= PTR_UNTRUSTED;
+
			/* our field must be inside that union or struct */
			t = mtype;

@@ -6364,7 +6383,7 @@ error:
			stype = btf_type_skip_modifiers(btf, mtype->type, &id);
			if (btf_type_is_struct(stype)) {
				*next_btf_id = id;
-				*flag = tmp_flag;
+				*flag |= tmp_flag;
				return WALK_PTR;
			}
		}
@@ -7704,6 +7723,19 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
		return BTF_KFUNC_HOOK_TRACING;
	case BPF_PROG_TYPE_SYSCALL:
		return BTF_KFUNC_HOOK_SYSCALL;
+	case BPF_PROG_TYPE_CGROUP_SKB:
+		return BTF_KFUNC_HOOK_CGROUP_SKB;
+	case BPF_PROG_TYPE_SCHED_ACT:
+		return BTF_KFUNC_HOOK_SCHED_ACT;
+	case BPF_PROG_TYPE_SK_SKB:
+		return BTF_KFUNC_HOOK_SK_SKB;
+	case BPF_PROG_TYPE_SOCKET_FILTER:
+		return BTF_KFUNC_HOOK_SOCKET_FILTER;
+	case BPF_PROG_TYPE_LWT_OUT:
+	case BPF_PROG_TYPE_LWT_IN:
+	case BPF_PROG_TYPE_LWT_XMIT:
+	case BPF_PROG_TYPE_LWT_SEG6LOCAL:
+		return BTF_KFUNC_HOOK_LWT;
	default:
		return BTF_KFUNC_HOOK_MAX;
	}
@@ -8335,7 +8367,7 @@ out:

 bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
				const struct bpf_reg_state *reg,
-				int off)
+				int off, const char *suffix)
 {
	struct btf *btf = reg->btf;
	const struct btf_type *walk_type, *safe_type;
@@ -8352,7 +8384,7 @@ bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,

	tname = btf_name_by_offset(btf, walk_type->name_off);

-	ret = snprintf(safe_tname, sizeof(safe_tname), "%s__safe_fields", tname);
+	ret = snprintf(safe_tname, sizeof(safe_tname), "%s%s", tname, suffix);
	if (ret < 0)
		return false;

@@ -2223,10 +2223,12 @@ static u32 sysctl_convert_ctx_access(enum bpf_access_type type,
			BPF_FIELD_SIZEOF(struct bpf_sysctl_kern, ppos),
			treg, si->dst_reg,
			offsetof(struct bpf_sysctl_kern, ppos));
-		*insn++ = BPF_STX_MEM(
-			BPF_SIZEOF(u32), treg, si->src_reg,
+		*insn++ = BPF_RAW_INSN(
+			BPF_CLASS(si->code) | BPF_MEM | BPF_SIZEOF(u32),
+			treg, si->src_reg,
			bpf_ctx_narrow_access_offset(
-				0, sizeof(u32), sizeof(loff_t)));
+				0, sizeof(u32), sizeof(loff_t)),
+			si->imm);
		*insn++ = BPF_LDX_MEM(
			BPF_DW, treg, si->dst_reg,
			offsetof(struct bpf_sysctl_kern, tmp_reg));
@@ -2376,10 +2378,17 @@ static bool cg_sockopt_is_valid_access(int off, int size,
	return true;
 }

-#define CG_SOCKOPT_ACCESS_FIELD(T, F)					\
-	T(BPF_FIELD_SIZEOF(struct bpf_sockopt_kern, F),			\
-	  si->dst_reg, si->src_reg,					\
-	  offsetof(struct bpf_sockopt_kern, F))
+#define CG_SOCKOPT_READ_FIELD(F)					\
+	BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct bpf_sockopt_kern, F),	\
+		    si->dst_reg, si->src_reg,				\
+		    offsetof(struct bpf_sockopt_kern, F))
+
+#define CG_SOCKOPT_WRITE_FIELD(F)					\
+	BPF_RAW_INSN((BPF_FIELD_SIZEOF(struct bpf_sockopt_kern, F) |	\
+		      BPF_MEM | BPF_CLASS(si->code)),			\
+		     si->dst_reg, si->src_reg,				\
+		     offsetof(struct bpf_sockopt_kern, F),		\
+		     si->imm)

 static u32 cg_sockopt_convert_ctx_access(enum bpf_access_type type,
					 const struct bpf_insn *si,
@@ -2391,25 +2400,25 @@ static u32 cg_sockopt_convert_ctx_access(enum bpf_access_type type,

	switch (si->off) {
	case offsetof(struct bpf_sockopt, sk):
-		*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, sk);
+		*insn++ = CG_SOCKOPT_READ_FIELD(sk);
		break;
	case offsetof(struct bpf_sockopt, level):
		if (type == BPF_WRITE)
-			*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_STX_MEM, level);
+			*insn++ = CG_SOCKOPT_WRITE_FIELD(level);
		else
-			*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, level);
+			*insn++ = CG_SOCKOPT_READ_FIELD(level);
		break;
	case offsetof(struct bpf_sockopt, optname):
		if (type == BPF_WRITE)
-			*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_STX_MEM, optname);
+			*insn++ = CG_SOCKOPT_WRITE_FIELD(optname);
		else
-			*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, optname);
+			*insn++ = CG_SOCKOPT_READ_FIELD(optname);
		break;
	case offsetof(struct bpf_sockopt, optlen):
		if (type == BPF_WRITE)
-			*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_STX_MEM, optlen);
+			*insn++ = CG_SOCKOPT_WRITE_FIELD(optlen);
		else
-			*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, optlen);
+			*insn++ = CG_SOCKOPT_READ_FIELD(optlen);
		break;
	case offsetof(struct bpf_sockopt, retval):
		BUILD_BUG_ON(offsetof(struct bpf_cg_run_ctx, run_ctx) != 0);
@@ -2429,9 +2438,11 @@ static u32 cg_sockopt_convert_ctx_access(enum bpf_access_type type,
			*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct task_struct, bpf_ctx),
					      treg, treg,
					      offsetof(struct task_struct, bpf_ctx));
-			*insn++ = BPF_STX_MEM(BPF_FIELD_SIZEOF(struct bpf_cg_run_ctx, retval),
-					      treg, si->src_reg,
-					      offsetof(struct bpf_cg_run_ctx, retval));
+			*insn++ = BPF_RAW_INSN(BPF_CLASS(si->code) | BPF_MEM |
+					       BPF_FIELD_SIZEOF(struct bpf_cg_run_ctx, retval),
+					       treg, si->src_reg,
+					       offsetof(struct bpf_cg_run_ctx, retval),
+					       si->imm);
			*insn++ = BPF_LDX_MEM(BPF_DW, treg, si->dst_reg,
					      offsetof(struct bpf_sockopt_kern, tmp_reg));
		} else {
@@ -2447,10 +2458,10 @@ static u32 cg_sockopt_convert_ctx_access(enum bpf_access_type type,
		}
		break;
	case offsetof(struct bpf_sockopt, optval):
-		*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, optval);
+		*insn++ = CG_SOCKOPT_READ_FIELD(optval);
		break;
	case offsetof(struct bpf_sockopt, optval_end):
-		*insn++ = CG_SOCKOPT_ACCESS_FIELD(BPF_LDX_MEM, optval_end);
+		*insn++ = CG_SOCKOPT_READ_FIELD(optval_end);
		break;
	}

@@ -2529,10 +2540,6 @@ cgroup_current_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
		return &bpf_get_current_pid_tgid_proto;
	case BPF_FUNC_get_current_comm:
		return &bpf_get_current_comm_proto;
-	case BPF_FUNC_get_current_cgroup_id:
-		return &bpf_get_current_cgroup_id_proto;
-	case BPF_FUNC_get_current_ancestor_cgroup_id:
-		return &bpf_get_current_ancestor_cgroup_id_proto;
 #ifdef CONFIG_CGROUP_NET_CLASSID
	case BPF_FUNC_get_cgroup_classid:
		return &bpf_get_cgroup_classid_curr_proto;

@@ -55,7 +55,7 @@ __bpf_kfunc struct bpf_cpumask *bpf_cpumask_create(void)
	/* cpumask must be the first element so struct bpf_cpumask be cast to struct cpumask. */
	BUILD_BUG_ON(offsetof(struct bpf_cpumask, cpumask) != 0);

-	cpumask = bpf_mem_alloc(&bpf_cpumask_ma, sizeof(*cpumask));
+	cpumask = bpf_mem_cache_alloc(&bpf_cpumask_ma);
	if (!cpumask)
		return NULL;

@@ -123,7 +123,7 @@ __bpf_kfunc void bpf_cpumask_release(struct bpf_cpumask *cpumask)

	if (refcount_dec_and_test(&cpumask->usage)) {
		migrate_disable();
-		bpf_mem_free(&bpf_cpumask_ma, cpumask);
+		bpf_mem_cache_free(&bpf_cpumask_ma, cpumask);
		migrate_enable();
	}
 }
@@ -427,26 +427,26 @@ BTF_ID_FLAGS(func, bpf_cpumask_create, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_cpumask_release, KF_RELEASE | KF_TRUSTED_ARGS)
 BTF_ID_FLAGS(func, bpf_cpumask_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
 BTF_ID_FLAGS(func, bpf_cpumask_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
-BTF_ID_FLAGS(func, bpf_cpumask_first, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_first_zero, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_set_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_clear_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_test_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_test_and_set_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_test_and_clear_cpu, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_setall, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_clear, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_and, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_or, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_xor, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_equal, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_intersects, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_subset, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_empty, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_full, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_copy, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_any, KF_TRUSTED_ARGS)
-BTF_ID_FLAGS(func, bpf_cpumask_any_and, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_first, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_first_zero, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_set_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_clear_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_test_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_test_and_set_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_test_and_clear_cpu, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_setall, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_clear, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_and, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_or, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_xor, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_equal, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_intersects, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_subset, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_empty, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_full, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_copy, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_any, KF_RCU)
+BTF_ID_FLAGS(func, bpf_cpumask_any_and, KF_RCU)
 BTF_SET8_END(cpumask_kfunc_btf_ids)

 static const struct btf_kfunc_id_set cpumask_kfunc_set = {
@@ -468,7 +468,7 @@ static int __init cpumask_kfunc_init(void)
		},
	};

-	ret = bpf_mem_alloc_init(&bpf_cpumask_ma, 0, false);
+	ret = bpf_mem_alloc_init(&bpf_cpumask_ma, sizeof(struct bpf_cpumask), false);
	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &cpumask_kfunc_set);
	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &cpumask_kfunc_set);
	return ret ?: register_btf_id_dtor_kfuncs(cpumask_dtors,

@@ -249,7 +249,18 @@ static void htab_free_prealloced_fields(struct bpf_htab *htab)
		struct htab_elem *elem;

		elem = get_htab_elem(htab, i);
-		bpf_obj_free_fields(htab->map.record, elem->key + round_up(htab->map.key_size, 8));
+		if (htab_is_percpu(htab)) {
+			void __percpu *pptr = htab_elem_get_ptr(elem, htab->map.key_size);
+			int cpu;
+
+			for_each_possible_cpu(cpu) {
+				bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu));
+				cond_resched();
+			}
+		} else {
+			bpf_obj_free_fields(htab->map.record, elem->key + round_up(htab->map.key_size, 8));
+			cond_resched();
+		}
		cond_resched();
	}
 }
@@ -759,9 +770,17 @@ static int htab_lru_map_gen_lookup(struct bpf_map *map,
 static void check_and_free_fields(struct bpf_htab *htab,
				  struct htab_elem *elem)
 {
-	void *map_value = elem->key + round_up(htab->map.key_size, 8);
+	if (htab_is_percpu(htab)) {
+		void __percpu *pptr = htab_elem_get_ptr(elem, htab->map.key_size);
+		int cpu;

-	bpf_obj_free_fields(htab->map.record, map_value);
+		for_each_possible_cpu(cpu)
+			bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu));
+	} else {
+		void *map_value = elem->key + round_up(htab->map.key_size, 8);
+
+		bpf_obj_free_fields(htab->map.record, map_value);
+	}
 }

 /* It is called from the bpf_lru_list when the LRU needs to delete
@@ -858,9 +877,9 @@ find_first_elem:

 static void htab_elem_free(struct bpf_htab *htab, struct htab_elem *l)
 {
-	check_and_free_fields(htab, l);
	if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH)
		bpf_mem_cache_free(&htab->pcpu_ma, l->ptr_to_pptr);
+	check_and_free_fields(htab, l);
	bpf_mem_cache_free(&htab->ma, l);
 }

@@ -918,14 +937,13 @@ static void pcpu_copy_value(struct bpf_htab *htab, void __percpu *pptr,
 {
	if (!onallcpus) {
		/* copy true value_size bytes */
-		memcpy(this_cpu_ptr(pptr), value, htab->map.value_size);
+		copy_map_value(&htab->map, this_cpu_ptr(pptr), value);
	} else {
		u32 size = round_up(htab->map.value_size, 8);
		int off = 0, cpu;

		for_each_possible_cpu(cpu) {
-			bpf_long_memcpy(per_cpu_ptr(pptr, cpu),
-					value + off, size);
+			copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value + off);
			off += size;
		}
	}
@@ -940,16 +958,14 @@ static void pcpu_init_value(struct bpf_htab *htab, void __percpu *pptr,
	 * (onallcpus=false always when coming from bpf prog).
	 */
	if (!onallcpus) {
-		u32 size = round_up(htab->map.value_size, 8);
		int current_cpu = raw_smp_processor_id();
		int cpu;

		for_each_possible_cpu(cpu) {
			if (cpu == current_cpu)
-				bpf_long_memcpy(per_cpu_ptr(pptr, cpu), value,
-						size);
-			else
-				memset(per_cpu_ptr(pptr, cpu), 0, size);
+				copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value);
+			else /* Since elem is preallocated, we cannot touch special fields */
+				zero_map_value(&htab->map, per_cpu_ptr(pptr, cpu));
		}
	} else {
		pcpu_copy_value(htab, pptr, value, onallcpus);
@@ -1575,9 +1591,8 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key,

			pptr = htab_elem_get_ptr(l, key_size);
			for_each_possible_cpu(cpu) {
-				bpf_long_memcpy(value + off,
-						per_cpu_ptr(pptr, cpu),
-						roundup_value_size);
+				copy_map_value_long(&htab->map, value + off, per_cpu_ptr(pptr, cpu));
+				check_and_init_map_value(&htab->map, value + off);
				off += roundup_value_size;
			}
		} else {
@@ -1772,8 +1787,8 @@ again_nocopy:

			pptr = htab_elem_get_ptr(l, map->key_size);
			for_each_possible_cpu(cpu) {
-				bpf_long_memcpy(dst_val + off,
-					per_cpu_ptr(pptr, cpu), size);
+				copy_map_value_long(&htab->map, dst_val + off, per_cpu_ptr(pptr, cpu));
+				check_and_init_map_value(&htab->map, dst_val + off);
				off += size;
			}
		} else {
@@ -2046,9 +2061,9 @@ static int __bpf_hash_map_seq_show(struct seq_file *seq, struct htab_elem *elem)
		roundup_value_size = round_up(map->value_size, 8);
		pptr = htab_elem_get_ptr(elem, map->key_size);
		for_each_possible_cpu(cpu) {
-			bpf_long_memcpy(info->percpu_value_buf + off,
-					per_cpu_ptr(pptr, cpu),
-					roundup_value_size);
+			copy_map_value_long(map, info->percpu_value_buf + off,
+					    per_cpu_ptr(pptr, cpu));
+			check_and_init_map_value(map, info->percpu_value_buf + off);
			off += roundup_value_size;
		}
		ctx.value = info->percpu_value_buf;
@@ -2292,8 +2307,8 @@ int bpf_percpu_hash_copy(struct bpf_map *map, void *key, void *value)
	 */
	pptr = htab_elem_get_ptr(l, map->key_size);
	for_each_possible_cpu(cpu) {
-		bpf_long_memcpy(value + off,
-				per_cpu_ptr(pptr, cpu), size);
+		copy_map_value_long(map, value + off, per_cpu_ptr(pptr, cpu));
+		check_and_init_map_value(map, value + off);
		off += size;
	}
	ret = 0;

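With the hashtab.c changes above, special fields such as kptrs are now freed per CPU; declaring one in a percpu hashmap value might look like this (a sketch, assuming the usual libbpf map-definition macros)::

  struct map_value {
          struct task_struct __kptr *task;   /* freed per CPU on delete */
  };

  struct {
          __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
          __uint(max_entries, 128);
          __type(key, int);
          __type(value, struct map_value);
  } pcpu_map SEC(".maps");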
@ -1264,10 +1264,11 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
|
|||
{
|
||||
struct bpf_hrtimer *t;
|
||||
int ret = 0;
|
||||
enum hrtimer_mode mode;
|
||||
|
||||
if (in_nmi())
|
||||
return -EOPNOTSUPP;
|
||||
if (flags)
|
||||
if (flags > BPF_F_TIMER_ABS)
|
||||
return -EINVAL;
|
||||
__bpf_spin_lock_irqsave(&timer->lock);
|
||||
t = timer->timer;
|
||||
|
@ -1275,7 +1276,13 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
|
|||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
hrtimer_start(&t->timer, ns_to_ktime(nsecs), HRTIMER_MODE_REL_SOFT);
|
||||
|
||||
if (flags & BPF_F_TIMER_ABS)
|
||||
mode = HRTIMER_MODE_ABS_SOFT;
|
||||
else
|
||||
mode = HRTIMER_MODE_REL_SOFT;
|
||||
|
||||
hrtimer_start(&t->timer, ns_to_ktime(nsecs), mode);
|
||||
out:
|
||||
__bpf_spin_unlock_irqrestore(&timer->lock);
|
||||
return ret;
|
||||
|
@@ -1420,11 +1427,21 @@ static bool bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr)
 	return ptr->size & DYNPTR_RDONLY_BIT;
 }
 
+void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr)
+{
+	ptr->size |= DYNPTR_RDONLY_BIT;
+}
+
 static void bpf_dynptr_set_type(struct bpf_dynptr_kern *ptr, enum bpf_dynptr_type type)
 {
 	ptr->size |= type << DYNPTR_TYPE_SHIFT;
 }
 
+static enum bpf_dynptr_type bpf_dynptr_get_type(const struct bpf_dynptr_kern *ptr)
+{
+	return (ptr->size & ~(DYNPTR_RDONLY_BIT)) >> DYNPTR_TYPE_SHIFT;
+}
+
 u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr)
 {
 	return ptr->size & DYNPTR_SIZE_MASK;
@@ -1497,6 +1514,7 @@ static const struct bpf_func_proto bpf_dynptr_from_mem_proto = {
 BPF_CALL_5(bpf_dynptr_read, void *, dst, u32, len, const struct bpf_dynptr_kern *, src,
 	   u32, offset, u64, flags)
 {
+	enum bpf_dynptr_type type;
 	int err;
 
 	if (!src->data || flags)
@@ -1506,13 +1524,25 @@ BPF_CALL_5(bpf_dynptr_read, void *, dst, u32, len, const struct bpf_dynptr_kern
 	if (err)
 		return err;
 
-	/* Source and destination may possibly overlap, hence use memmove to
-	 * copy the data. E.g. bpf_dynptr_from_mem may create two dynptr
-	 * pointing to overlapping PTR_TO_MAP_VALUE regions.
-	 */
-	memmove(dst, src->data + src->offset + offset, len);
+	type = bpf_dynptr_get_type(src);
 
-	return 0;
+	switch (type) {
+	case BPF_DYNPTR_TYPE_LOCAL:
+	case BPF_DYNPTR_TYPE_RINGBUF:
+		/* Source and destination may possibly overlap, hence use memmove to
+		 * copy the data. E.g. bpf_dynptr_from_mem may create two dynptr
+		 * pointing to overlapping PTR_TO_MAP_VALUE regions.
+		 */
+		memmove(dst, src->data + src->offset + offset, len);
+		return 0;
+	case BPF_DYNPTR_TYPE_SKB:
+		return __bpf_skb_load_bytes(src->data, src->offset + offset, dst, len);
+	case BPF_DYNPTR_TYPE_XDP:
+		return __bpf_xdp_load_bytes(src->data, src->offset + offset, dst, len);
+	default:
+		WARN_ONCE(true, "bpf_dynptr_read: unknown dynptr type %d\n", type);
+		return -EFAULT;
+	}
 }
 
 static const struct bpf_func_proto bpf_dynptr_read_proto = {
@@ -1529,22 +1559,40 @@ static const struct bpf_func_proto bpf_dynptr_read_proto = {
 BPF_CALL_5(bpf_dynptr_write, const struct bpf_dynptr_kern *, dst, u32, offset, void *, src,
 	   u32, len, u64, flags)
 {
+	enum bpf_dynptr_type type;
 	int err;
 
-	if (!dst->data || flags || bpf_dynptr_is_rdonly(dst))
+	if (!dst->data || bpf_dynptr_is_rdonly(dst))
 		return -EINVAL;
 
 	err = bpf_dynptr_check_off_len(dst, offset, len);
 	if (err)
 		return err;
 
-	/* Source and destination may possibly overlap, hence use memmove to
-	 * copy the data. E.g. bpf_dynptr_from_mem may create two dynptr
-	 * pointing to overlapping PTR_TO_MAP_VALUE regions.
-	 */
-	memmove(dst->data + dst->offset + offset, src, len);
+	type = bpf_dynptr_get_type(dst);
 
-	return 0;
+	switch (type) {
+	case BPF_DYNPTR_TYPE_LOCAL:
+	case BPF_DYNPTR_TYPE_RINGBUF:
+		if (flags)
+			return -EINVAL;
+		/* Source and destination may possibly overlap, hence use memmove to
+		 * copy the data. E.g. bpf_dynptr_from_mem may create two dynptr
+		 * pointing to overlapping PTR_TO_MAP_VALUE regions.
+		 */
+		memmove(dst->data + dst->offset + offset, src, len);
+		return 0;
+	case BPF_DYNPTR_TYPE_SKB:
+		return __bpf_skb_store_bytes(dst->data, dst->offset + offset, src, len,
+					     flags);
+	case BPF_DYNPTR_TYPE_XDP:
+		if (flags)
+			return -EINVAL;
+		return __bpf_xdp_store_bytes(dst->data, dst->offset + offset, src, len);
+	default:
+		WARN_ONCE(true, "bpf_dynptr_write: unknown dynptr type %d\n", type);
+		return -EFAULT;
+	}
 }
 
 static const struct bpf_func_proto bpf_dynptr_write_proto = {
@@ -1560,6 +1608,7 @@ static const struct bpf_func_proto bpf_dynptr_write_proto = {
 
 BPF_CALL_3(bpf_dynptr_data, const struct bpf_dynptr_kern *, ptr, u32, offset, u32, len)
 {
+	enum bpf_dynptr_type type;
 	int err;
 
 	if (!ptr->data)
@@ -1572,7 +1621,20 @@ BPF_CALL_3(bpf_dynptr_data, const struct bpf_dynptr_kern *, ptr, u32, offset, u3
 	if (bpf_dynptr_is_rdonly(ptr))
 		return 0;
 
-	return (unsigned long)(ptr->data + ptr->offset + offset);
+	type = bpf_dynptr_get_type(ptr);
+
+	switch (type) {
+	case BPF_DYNPTR_TYPE_LOCAL:
+	case BPF_DYNPTR_TYPE_RINGBUF:
+		return (unsigned long)(ptr->data + ptr->offset + offset);
+	case BPF_DYNPTR_TYPE_SKB:
+	case BPF_DYNPTR_TYPE_XDP:
+		/* skb and xdp dynptrs should use bpf_dynptr_slice / bpf_dynptr_slice_rdwr */
+		return 0;
+	default:
+		WARN_ONCE(true, "bpf_dynptr_data: unknown dynptr type %d\n", type);
+		return 0;
+	}
 }
 
 static const struct bpf_func_proto bpf_dynptr_data_proto = {
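
As a usage sketch: for skb-type dynptrs, bpf_dynptr_write() now forwards
*flags* to __bpf_skb_store_bytes(), so e.g. BPF_F_RECOMPUTE_CSUM can be
passed. The kfunc extern, offset and section below are illustrative, with the
usual vmlinux.h/bpf_helpers.h includes assumed:

extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
			       struct bpf_dynptr *ptr) __ksym;

SEC("tc")
int zero_tos(struct __sk_buff *skb)
{
	struct bpf_dynptr ptr;
	__u8 tos = 0;

	if (bpf_dynptr_from_skb(skb, 0, &ptr))
		return TC_ACT_OK; /* TC_ACT_* from linux/pkt_cls.h */
	/* byte 15 = IPv4 TOS field behind a 14-byte Ethernet header */
	bpf_dynptr_write(&ptr, 15, &tos, sizeof(tos), BPF_F_RECOMPUTE_CSUM);
	return TC_ACT_OK;
}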
@@ -1693,6 +1755,10 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 		return &bpf_cgrp_storage_get_proto;
 	case BPF_FUNC_cgrp_storage_delete:
 		return &bpf_cgrp_storage_delete_proto;
+	case BPF_FUNC_get_current_cgroup_id:
+		return &bpf_get_current_cgroup_id_proto;
+	case BPF_FUNC_get_current_ancestor_cgroup_id:
+		return &bpf_get_current_ancestor_cgroup_id_proto;
 #endif
 	default:
 		break;
@@ -2097,10 +2163,28 @@ __bpf_kfunc struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level)
 	if (level > cgrp->level || level < 0)
 		return NULL;
 
+	/* cgrp's refcnt could be 0 here, but ancestors can still be accessed */
 	ancestor = cgrp->ancestors[level];
-	cgroup_get(ancestor);
+	if (!cgroup_tryget(ancestor))
+		return NULL;
 	return ancestor;
 }
 
+/**
+ * bpf_cgroup_from_id - Find a cgroup from its ID. A cgroup returned by this
+ * kfunc which is not subsequently stored in a map, must be released by calling
+ * bpf_cgroup_release().
+ * @cgid: cgroup id.
+ */
+__bpf_kfunc struct cgroup *bpf_cgroup_from_id(u64 cgid)
+{
+	struct cgroup *cgrp;
+
+	cgrp = cgroup_get_from_id(cgid);
+	if (IS_ERR(cgrp))
+		return NULL;
+	return cgrp;
+}
 #endif /* CONFIG_CGROUPS */
 
 /**
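
A minimal BPF-side sketch of the new kfunc (extern declarations, attach point
and the looked-up id are illustrative):

extern struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
extern void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

SEC("tp_btf/task_newtask")
int probe_cgroup(void *ctx)
{
	struct cgroup *cgrp = bpf_cgroup_from_id(1); /* e.g. root cgroup id */

	if (!cgrp)
		return 0;
	bpf_printk("cgroup level %d", cgrp->level);
	bpf_cgroup_release(cgrp); /* acquired reference must be released */
	return 0;
}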
@@ -2122,6 +2206,140 @@ __bpf_kfunc struct task_struct *bpf_task_from_pid(s32 pid)
 	return p;
 }
 
+/**
+ * bpf_dynptr_slice() - Obtain a read-only pointer to the dynptr data.
+ * @ptr: The dynptr whose data slice to retrieve
+ * @offset: Offset into the dynptr
+ * @buffer: User-provided buffer to copy contents into
+ * @buffer__szk: Size (in bytes) of the buffer. This is the length of the
+ * requested slice. This must be a constant.
+ *
+ * For non-skb and non-xdp type dynptrs, there is no difference between
+ * bpf_dynptr_slice and bpf_dynptr_data.
+ *
+ * If the intention is to write to the data slice, please use
+ * bpf_dynptr_slice_rdwr.
+ *
+ * The user must check that the returned pointer is not null before using it.
+ *
+ * Please note that in the case of skb and xdp dynptrs, bpf_dynptr_slice
+ * does not change the underlying packet data pointers, so a call to
+ * bpf_dynptr_slice will not invalidate any ctx->data/data_end pointers in
+ * the bpf program.
+ *
+ * Return: NULL if the call failed (eg invalid dynptr), pointer to a read-only
+ * data slice (can be either direct pointer to the data or a pointer to the user
+ * provided buffer, with its contents containing the data, if unable to obtain
+ * direct pointer)
+ */
+__bpf_kfunc void *bpf_dynptr_slice(const struct bpf_dynptr_kern *ptr, u32 offset,
+				   void *buffer, u32 buffer__szk)
+{
+	enum bpf_dynptr_type type;
+	u32 len = buffer__szk;
+	int err;
+
+	if (!ptr->data)
+		return NULL;
+
+	err = bpf_dynptr_check_off_len(ptr, offset, len);
+	if (err)
+		return NULL;
+
+	type = bpf_dynptr_get_type(ptr);
+
+	switch (type) {
+	case BPF_DYNPTR_TYPE_LOCAL:
+	case BPF_DYNPTR_TYPE_RINGBUF:
+		return ptr->data + ptr->offset + offset;
+	case BPF_DYNPTR_TYPE_SKB:
+		return skb_header_pointer(ptr->data, ptr->offset + offset, len, buffer);
+	case BPF_DYNPTR_TYPE_XDP:
+	{
+		void *xdp_ptr = bpf_xdp_pointer(ptr->data, ptr->offset + offset, len);
+		if (xdp_ptr)
+			return xdp_ptr;
+
+		bpf_xdp_copy_buf(ptr->data, ptr->offset + offset, buffer, len, false);
+		return buffer;
+	}
+	default:
+		WARN_ONCE(true, "unknown dynptr type %d\n", type);
+		return NULL;
+	}
+}
+
+/**
+ * bpf_dynptr_slice_rdwr() - Obtain a writable pointer to the dynptr data.
+ * @ptr: The dynptr whose data slice to retrieve
+ * @offset: Offset into the dynptr
+ * @buffer: User-provided buffer to copy contents into
+ * @buffer__szk: Size (in bytes) of the buffer. This is the length of the
+ * requested slice. This must be a constant.
+ *
+ * For non-skb and non-xdp type dynptrs, there is no difference between
+ * bpf_dynptr_slice and bpf_dynptr_data.
+ *
+ * The returned pointer is writable and may point to either directly the dynptr
+ * data at the requested offset or to the buffer if unable to obtain a direct
+ * data pointer to (example: the requested slice is to the paged area of an skb
+ * packet). In the case where the returned pointer is to the buffer, the user
+ * is responsible for persisting writes through calling bpf_dynptr_write(). This
+ * usually looks something like this pattern:
+ *
+ * struct eth_hdr *eth = bpf_dynptr_slice_rdwr(&dynptr, 0, buffer, sizeof(buffer));
+ * if (!eth)
+ *	return TC_ACT_SHOT;
+ *
+ * // mutate eth header //
+ *
+ * if (eth == buffer)
+ *	bpf_dynptr_write(&ptr, 0, buffer, sizeof(buffer), 0);
+ *
+ * Please note that, as in the example above, the user must check that the
+ * returned pointer is not null before using it.
+ *
+ * Please also note that in the case of skb and xdp dynptrs, bpf_dynptr_slice_rdwr
+ * does not change the underlying packet data pointers, so a call to
+ * bpf_dynptr_slice_rdwr will not invalidate any ctx->data/data_end pointers in
+ * the bpf program.
+ *
+ * Return: NULL if the call failed (eg invalid dynptr), pointer to a
+ * data slice (can be either direct pointer to the data or a pointer to the user
+ * provided buffer, with its contents containing the data, if unable to obtain
+ * direct pointer)
+ */
+__bpf_kfunc void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr_kern *ptr, u32 offset,
+					void *buffer, u32 buffer__szk)
+{
+	if (!ptr->data || bpf_dynptr_is_rdonly(ptr))
+		return NULL;
+
+	/* bpf_dynptr_slice_rdwr is the same logic as bpf_dynptr_slice.
+	 *
+	 * For skb-type dynptrs, it is safe to write into the returned pointer
+	 * if the bpf program allows skb data writes. There are two possiblities
+	 * that may occur when calling bpf_dynptr_slice_rdwr:
+	 *
+	 * 1) The requested slice is in the head of the skb. In this case, the
+	 * returned pointer is directly to skb data, and if the skb is cloned, the
+	 * verifier will have uncloned it (see bpf_unclone_prologue()) already.
+	 * The pointer can be directly written into.
+	 *
+	 * 2) Some portion of the requested slice is in the paged buffer area.
+	 * In this case, the requested data will be copied out into the buffer
+	 * and the returned pointer will be a pointer to the buffer. The skb
+	 * will not be pulled. To persist the write, the user will need to call
+	 * bpf_dynptr_write(), which will pull the skb and commit the write.
+	 *
+	 * Similarly for xdp programs, if the requested slice is not across xdp
+	 * fragments, then a direct pointer will be returned, otherwise the data
+	 * will be copied out into the buffer and the user will need to call
+	 * bpf_dynptr_write() to commit changes.
+	 */
+	return bpf_dynptr_slice(ptr, offset, buffer, buffer__szk);
+}
+
 __bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj)
 {
 	return obj;
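
A short read-side sketch of bpf_dynptr_slice(): the caller always supplies a
fallback buffer and must check the returned pointer. Externs, offsets and the
section are illustrative, with vmlinux.h/bpf_helpers.h includes assumed:

extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
			       struct bpf_dynptr *ptr) __ksym;
extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, __u32 offset,
			      void *buffer, __u32 buffer__szk) __ksym;

SEC("tc")
int drop_non_tcp(struct __sk_buff *skb)
{
	struct bpf_dynptr ptr;
	struct iphdr buf, *iph;

	if (bpf_dynptr_from_skb(skb, 0, &ptr))
		return TC_ACT_OK;
	/* direct pointer into the packet when possible, else a copy in buf */
	iph = bpf_dynptr_slice(&ptr, sizeof(struct ethhdr), &buf, sizeof(buf));
	if (!iph)
		return TC_ACT_OK;
	return iph->protocol == IPPROTO_TCP ? TC_ACT_OK : TC_ACT_SHOT;
}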
@@ -2166,7 +2384,8 @@ BTF_ID_FLAGS(func, bpf_rbtree_first, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_cgroup_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
 BTF_ID_FLAGS(func, bpf_cgroup_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_cgroup_release, KF_RELEASE)
-BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL)
 #endif
 BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL)
 BTF_SET8_END(generic_btf_ids)
@@ -2190,6 +2409,8 @@ BTF_ID_FLAGS(func, bpf_cast_to_kern_ctx)
 BTF_ID_FLAGS(func, bpf_rdonly_cast)
 BTF_ID_FLAGS(func, bpf_rcu_read_lock)
 BTF_ID_FLAGS(func, bpf_rcu_read_unlock)
+BTF_ID_FLAGS(func, bpf_dynptr_slice, KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
 BTF_SET8_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
@@ -1059,9 +1059,15 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
 		case BPF_KPTR_UNREF:
 		case BPF_KPTR_REF:
 			if (map->map_type != BPF_MAP_TYPE_HASH &&
+			    map->map_type != BPF_MAP_TYPE_PERCPU_HASH &&
 			    map->map_type != BPF_MAP_TYPE_LRU_HASH &&
+			    map->map_type != BPF_MAP_TYPE_LRU_PERCPU_HASH &&
 			    map->map_type != BPF_MAP_TYPE_ARRAY &&
-			    map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY) {
+			    map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY &&
+			    map->map_type != BPF_MAP_TYPE_SK_STORAGE &&
+			    map->map_type != BPF_MAP_TYPE_INODE_STORAGE &&
+			    map->map_type != BPF_MAP_TYPE_TASK_STORAGE &&
+			    map->map_type != BPF_MAP_TYPE_CGRP_STORAGE) {
				ret = -EOPNOTSUPP;
				goto free_map_tab;
			}
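
With the additional map types allowed above, a referenced kptr can now live in
local storage. A minimal map-definition sketch (names are illustrative; __kptr
is the libbpf annotation after the rename further down in this series):

struct task_val {
	struct cgroup __kptr *cg; /* referenced kptr in a task-storage value */
};

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct task_val);
} task_vals SEC(".maps");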
(Diff not shown for this file because of its large size.)
@@ -1453,10 +1453,6 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 			       NULL : &bpf_probe_read_compat_str_proto;
 #endif
 #ifdef CONFIG_CGROUPS
-	case BPF_FUNC_get_current_cgroup_id:
-		return &bpf_get_current_cgroup_id_proto;
-	case BPF_FUNC_get_current_ancestor_cgroup_id:
-		return &bpf_get_current_ancestor_cgroup_id_proto;
 	case BPF_FUNC_cgrp_storage_get:
 		return &bpf_cgrp_storage_get_proto;
 	case BPF_FUNC_cgrp_storage_delete:
@@ -737,6 +737,7 @@ __bpf_kfunc void bpf_kfunc_call_test_mem_len_fail2(u64 *mem, int len)
 
 __bpf_kfunc void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p)
 {
+	/* p != NULL, but p->cnt could be 0 */
 }
 
 __bpf_kfunc void bpf_kfunc_call_test_destructive(void)
@@ -784,7 +785,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_fail3)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
-BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS | KF_RCU)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg)
 BTF_SET8_END(test_sk_check_kfunc_ids)
@@ -1721,6 +1721,12 @@ static const struct bpf_func_proto bpf_skb_store_bytes_proto = {
 	.arg5_type	= ARG_ANYTHING,
 };
 
+int __bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, const void *from,
+			  u32 len, u64 flags)
+{
+	return ____bpf_skb_store_bytes(skb, offset, from, len, flags);
+}
+
 BPF_CALL_4(bpf_skb_load_bytes, const struct sk_buff *, skb, u32, offset,
 	   void *, to, u32, len)
 {
@@ -1751,6 +1757,11 @@ static const struct bpf_func_proto bpf_skb_load_bytes_proto = {
 	.arg4_type	= ARG_CONST_SIZE,
 };
 
+int __bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, void *to, u32 len)
+{
+	return ____bpf_skb_load_bytes(skb, offset, to, len);
+}
+
 BPF_CALL_4(bpf_flow_dissector_load_bytes,
 	   const struct bpf_flow_dissector *, ctx, u32, offset,
 	   void *, to, u32, len)
@@ -3828,7 +3839,7 @@ static const struct bpf_func_proto sk_skb_change_head_proto = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
-BPF_CALL_1(bpf_xdp_get_buff_len, struct  xdp_buff*, xdp)
+BPF_CALL_1(bpf_xdp_get_buff_len, struct xdp_buff*, xdp)
 {
 	return xdp_get_buff_len(xdp);
 }
@@ -3883,8 +3894,8 @@ static const struct bpf_func_proto bpf_xdp_adjust_head_proto = {
 	.arg2_type	= ARG_ANYTHING,
 };
 
-static void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
-			     void *buf, unsigned long len, bool flush)
+void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
+		      void *buf, unsigned long len, bool flush)
 {
 	unsigned long ptr_len, ptr_off = 0;
 	skb_frag_t *next_frag, *end_frag;
@@ -3930,7 +3941,7 @@ static void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
 	}
 }
 
-static void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
+void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 	u32 size = xdp->data_end - xdp->data;
@@ -3988,6 +3999,11 @@ static const struct bpf_func_proto bpf_xdp_load_bytes_proto = {
 	.arg4_type	= ARG_CONST_SIZE,
 };
 
+int __bpf_xdp_load_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len)
+{
+	return ____bpf_xdp_load_bytes(xdp, offset, buf, len);
+}
+
 BPF_CALL_4(bpf_xdp_store_bytes, struct xdp_buff *, xdp, u32, offset,
 	   void *, buf, u32, len)
 {
@@ -4015,6 +4031,11 @@ static const struct bpf_func_proto bpf_xdp_store_bytes_proto = {
 	.arg4_type	= ARG_CONST_SIZE,
 };
 
+int __bpf_xdp_store_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len)
+{
+	return ____bpf_xdp_store_bytes(xdp, offset, buf, len);
+}
+
 static int bpf_xdp_frags_increase_tail(struct xdp_buff *xdp, int offset)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -8144,12 +8165,6 @@ sk_msg_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_sk_storage_delete_proto;
 	case BPF_FUNC_get_netns_cookie:
 		return &bpf_get_netns_cookie_sk_msg_proto;
-#ifdef CONFIG_CGROUPS
-	case BPF_FUNC_get_current_cgroup_id:
-		return &bpf_get_current_cgroup_id_proto;
-	case BPF_FUNC_get_current_ancestor_cgroup_id:
-		return &bpf_get_current_ancestor_cgroup_id_proto;
-#endif
 #ifdef CONFIG_CGROUP_NET_CLASSID
 	case BPF_FUNC_get_cgroup_classid:
 		return &bpf_get_cgroup_classid_curr_proto;
@@ -9264,11 +9279,15 @@ static struct bpf_insn *bpf_convert_tstamp_write(const struct bpf_prog *prog,
 #endif
 
 	/* <store>: skb->tstamp = tstamp */
-	*insn++ = BPF_STX_MEM(BPF_DW, skb_reg, value_reg,
-			      offsetof(struct sk_buff, tstamp));
+	*insn++ = BPF_RAW_INSN(BPF_CLASS(si->code) | BPF_DW | BPF_MEM,
+			       skb_reg, value_reg, offsetof(struct sk_buff, tstamp), si->imm);
 	return insn;
 }
 
+#define BPF_EMIT_STORE(size, si, off)					\
+	BPF_RAW_INSN(BPF_CLASS((si)->code) | (size) | BPF_MEM,		\
+		     (si)->dst_reg, (si)->src_reg, (off), (si)->imm)
+
 static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 				  const struct bpf_insn *si,
 				  struct bpf_insn *insn_buf,
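
The distinction the new macro hides: BPF_STX stores a source register, while
BPF_ST stores a 32-bit immediate, which -mcpu=v4 clang may now emit for ctx
writes. Purely illustrative, using the existing instruction macros:

struct bpf_insn stx = BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_2, 0); /* *(u32 *)(r1 + 0) = w2 */
struct bpf_insn st  = BPF_ST_MEM(BPF_W, BPF_REG_1, 0, 42);         /* *(u32 *)(r1 + 0) = 42 */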
@@ -9298,9 +9317,9 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 
 	case offsetof(struct __sk_buff, priority):
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
-					      bpf_target_off(struct sk_buff, priority, 4,
-							     target_size));
+			*insn++ = BPF_EMIT_STORE(BPF_W, si,
+						 bpf_target_off(struct sk_buff, priority, 4,
+								target_size));
 		else
 			*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
 					      bpf_target_off(struct sk_buff, priority, 4,
@@ -9331,9 +9350,9 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 
 	case offsetof(struct __sk_buff, mark):
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
-					      bpf_target_off(struct sk_buff, mark, 4,
-							     target_size));
+			*insn++ = BPF_EMIT_STORE(BPF_W, si,
+						 bpf_target_off(struct sk_buff, mark, 4,
+								target_size));
 		else
 			*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
 					      bpf_target_off(struct sk_buff, mark, 4,
@@ -9352,11 +9371,16 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 
 	case offsetof(struct __sk_buff, queue_mapping):
 		if (type == BPF_WRITE) {
-			*insn++ = BPF_JMP_IMM(BPF_JGE, si->src_reg, NO_QUEUE_MAPPING, 1);
-			*insn++ = BPF_STX_MEM(BPF_H, si->dst_reg, si->src_reg,
-					      bpf_target_off(struct sk_buff,
-							     queue_mapping,
-							     2, target_size));
+			u32 off = bpf_target_off(struct sk_buff, queue_mapping, 2, target_size);
+
+			if (BPF_CLASS(si->code) == BPF_ST && si->imm >= NO_QUEUE_MAPPING) {
+				*insn++ = BPF_JMP_A(0); /* noop */
+				break;
+			}
+
+			if (BPF_CLASS(si->code) == BPF_STX)
+				*insn++ = BPF_JMP_IMM(BPF_JGE, si->src_reg, NO_QUEUE_MAPPING, 1);
+			*insn++ = BPF_EMIT_STORE(BPF_H, si, off);
 		} else {
 			*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
 					      bpf_target_off(struct sk_buff,
@@ -9392,8 +9416,7 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 		off += offsetof(struct sk_buff, cb);
 		off += offsetof(struct qdisc_skb_cb, data);
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_SIZE(si->code), si->dst_reg,
-					      si->src_reg, off);
+			*insn++ = BPF_EMIT_STORE(BPF_SIZE(si->code), si, off);
 		else
 			*insn++ = BPF_LDX_MEM(BPF_SIZE(si->code), si->dst_reg,
 					      si->src_reg, off);
@@ -9408,8 +9431,7 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 		off += offsetof(struct qdisc_skb_cb, tc_classid);
 		*target_size = 2;
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_H, si->dst_reg,
-					      si->src_reg, off);
+			*insn++ = BPF_EMIT_STORE(BPF_H, si, off);
 		else
 			*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg,
 					      si->src_reg, off);
@@ -9442,9 +9464,9 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 	case offsetof(struct __sk_buff, tc_index):
 #ifdef CONFIG_NET_SCHED
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_H, si->dst_reg, si->src_reg,
-					      bpf_target_off(struct sk_buff, tc_index, 2,
-							     target_size));
+			*insn++ = BPF_EMIT_STORE(BPF_H, si,
						 bpf_target_off(struct sk_buff, tc_index, 2,
								target_size));
 		else
 			*insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
 					      bpf_target_off(struct sk_buff, tc_index, 2,
@@ -9645,8 +9667,8 @@ u32 bpf_sock_convert_ctx_access(enum bpf_access_type type,
 		BUILD_BUG_ON(sizeof_field(struct sock, sk_bound_dev_if) != 4);
 
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
-					      offsetof(struct sock, sk_bound_dev_if));
+			*insn++ = BPF_EMIT_STORE(BPF_W, si,
+						 offsetof(struct sock, sk_bound_dev_if));
 		else
 			*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
 					      offsetof(struct sock, sk_bound_dev_if));
@@ -9656,8 +9678,8 @@ u32 bpf_sock_convert_ctx_access(enum bpf_access_type type,
 		BUILD_BUG_ON(sizeof_field(struct sock, sk_mark) != 4);
 
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
-					      offsetof(struct sock, sk_mark));
+			*insn++ = BPF_EMIT_STORE(BPF_W, si,
+						 offsetof(struct sock, sk_mark));
 		else
 			*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
 					      offsetof(struct sock, sk_mark));
@@ -9667,8 +9689,8 @@ u32 bpf_sock_convert_ctx_access(enum bpf_access_type type,
 		BUILD_BUG_ON(sizeof_field(struct sock, sk_priority) != 4);
 
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
-					      offsetof(struct sock, sk_priority));
+			*insn++ = BPF_EMIT_STORE(BPF_W, si,
+						 offsetof(struct sock, sk_priority));
 		else
 			*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
 					      offsetof(struct sock, sk_priority));
@@ -9933,10 +9955,12 @@ static u32 xdp_convert_ctx_access(enum bpf_access_type type,
 						      offsetof(S, TF));	       \
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(S, F), tmp_reg,	       \
 				      si->dst_reg, offsetof(S, F));	       \
-		*insn++ = BPF_STX_MEM(SIZE, tmp_reg, si->src_reg,	       \
+		*insn++ = BPF_RAW_INSN(SIZE | BPF_MEM | BPF_CLASS(si->code),   \
+				       tmp_reg, si->src_reg,		       \
 				      bpf_target_off(NS, NF, sizeof_field(NS, NF), \
 						     target_size)	       \
-				      + OFF);				       \
+					+ OFF,				       \
+				       si->imm);			       \
 		*insn++ = BPF_LDX_MEM(BPF_DW, tmp_reg, si->dst_reg,	       \
 				      offsetof(S, TF));			       \
 	} while (0)
@@ -10171,9 +10195,11 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 						       struct bpf_sock_ops_kern, sk),\
 				      reg, si->dst_reg,			      \
 				      offsetof(struct bpf_sock_ops_kern, sk));\
-		*insn++ = BPF_STX_MEM(BPF_FIELD_SIZEOF(OBJ, OBJ_FIELD),	      \
-				      reg, si->src_reg,			      \
-				      offsetof(OBJ, OBJ_FIELD));	      \
+		*insn++ = BPF_RAW_INSN(BPF_FIELD_SIZEOF(OBJ, OBJ_FIELD) |    \
+				       BPF_MEM | BPF_CLASS(si->code),	      \
+				       reg, si->src_reg,		      \
+				       offsetof(OBJ, OBJ_FIELD),	      \
+				       si->imm);			      \
 		*insn++ = BPF_LDX_MEM(BPF_DW, reg, si->dst_reg,		      \
 				      offsetof(struct bpf_sock_ops_kern,      \
 					       temp));			      \
@@ -10205,8 +10231,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 		off -= offsetof(struct bpf_sock_ops, replylong[0]);
 		off += offsetof(struct bpf_sock_ops_kern, replylong[0]);
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_W, si->dst_reg, si->src_reg,
-					      off);
+			*insn++ = BPF_EMIT_STORE(BPF_W, si, off);
 		else
 			*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
 					      off);
@@ -10563,8 +10588,7 @@ static u32 sk_skb_convert_ctx_access(enum bpf_access_type type,
 		off += offsetof(struct sk_buff, cb);
 		off += offsetof(struct sk_skb_cb, data);
 		if (type == BPF_WRITE)
-			*insn++ = BPF_STX_MEM(BPF_SIZE(si->code), si->dst_reg,
-					      si->src_reg, off);
+			*insn++ = BPF_EMIT_STORE(BPF_SIZE(si->code), si, off);
 		else
 			*insn++ = BPF_LDX_MEM(BPF_SIZE(si->code), si->dst_reg,
 					      si->src_reg, off);
@@ -11621,3 +11645,82 @@ bpf_sk_base_func_proto(enum bpf_func_id func_id)
 
 	return func;
 }
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+		  "Global functions as their definitions will be in vmlinux BTF");
+__bpf_kfunc int bpf_dynptr_from_skb(struct sk_buff *skb, u64 flags,
+				    struct bpf_dynptr_kern *ptr__uninit)
+{
+	if (flags) {
+		bpf_dynptr_set_null(ptr__uninit);
+		return -EINVAL;
+	}
+
+	bpf_dynptr_init(ptr__uninit, skb, BPF_DYNPTR_TYPE_SKB, 0, skb->len);
+
+	return 0;
+}
+
+__bpf_kfunc int bpf_dynptr_from_xdp(struct xdp_buff *xdp, u64 flags,
+				    struct bpf_dynptr_kern *ptr__uninit)
+{
+	if (flags) {
+		bpf_dynptr_set_null(ptr__uninit);
+		return -EINVAL;
+	}
+
+	bpf_dynptr_init(ptr__uninit, xdp, BPF_DYNPTR_TYPE_XDP, 0, xdp_get_buff_len(xdp));
+
+	return 0;
+}
+__diag_pop();
+
+int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
+			       struct bpf_dynptr_kern *ptr__uninit)
+{
+	int err;
+
+	err = bpf_dynptr_from_skb(skb, flags, ptr__uninit);
+	if (err)
+		return err;
+
+	bpf_dynptr_set_rdonly(ptr__uninit);
+
+	return 0;
+}
+
+BTF_SET8_START(bpf_kfunc_check_set_skb)
+BTF_ID_FLAGS(func, bpf_dynptr_from_skb)
+BTF_SET8_END(bpf_kfunc_check_set_skb)
+
+BTF_SET8_START(bpf_kfunc_check_set_xdp)
+BTF_ID_FLAGS(func, bpf_dynptr_from_xdp)
+BTF_SET8_END(bpf_kfunc_check_set_xdp)
+
+static const struct btf_kfunc_id_set bpf_kfunc_set_skb = {
+	.owner = THIS_MODULE,
+	.set = &bpf_kfunc_check_set_skb,
+};
+
+static const struct btf_kfunc_id_set bpf_kfunc_set_xdp = {
+	.owner = THIS_MODULE,
+	.set = &bpf_kfunc_check_set_xdp,
+};
+
+static int __init bpf_kfunc_init(void)
+{
+	int ret;
+
+	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_ACT, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SK_SKB, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SOCKET_FILTER, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SKB, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_OUT, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_IN, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_XMIT, &bpf_kfunc_set_skb);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_SEG6LOCAL, &bpf_kfunc_set_skb);
+	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
+}
+late_initcall(bpf_kfunc_init);
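
The xdp counterpart registered above is used the same way; a minimal sketch
(extern and section are illustrative, vmlinux.h/bpf_helpers.h assumed):

extern int bpf_dynptr_from_xdp(struct xdp_md *xdp, __u64 flags,
			       struct bpf_dynptr *ptr) __ksym;

SEC("xdp")
int peek_first_byte(struct xdp_md *ctx)
{
	struct bpf_dynptr ptr;
	__u8 byte;

	if (bpf_dynptr_from_xdp(ctx, 0, &ptr))
		return XDP_PASS;
	/* works across frags, unlike direct ctx->data accesses */
	bpf_dynptr_read(&byte, sizeof(byte), &ptr, 0, 0);
	return XDP_PASS;
}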
@@ -1,9 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
-#define _UAPI__ASM_BPF_PERF_EVENT_H__
-
-#include <asm/ptrace.h>
-
-typedef struct user_pt_regs bpf_user_pt_regs_t;
-
-#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
@@ -1,9 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
-#define _UAPI__ASM_BPF_PERF_EVENT_H__
-
-#include "ptrace.h"
-
-typedef user_pt_regs bpf_user_pt_regs_t;
-
-#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
@@ -1,458 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-/*
- * S390 version
- * Copyright IBM Corp. 1999, 2000
- * Author(s): Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com)
- */
-
-#ifndef _UAPI_S390_PTRACE_H
-#define _UAPI_S390_PTRACE_H
-
-/*
- * Offsets in the user_regs_struct. They are used for the ptrace
- * system call and in entry.S
- */
-#ifndef __s390x__
-
-#define PT_PSWMASK 0x00
-#define PT_PSWADDR 0x04
-#define PT_GPR0 0x08
-#define PT_GPR1 0x0C
-#define PT_GPR2 0x10
-#define PT_GPR3 0x14
-#define PT_GPR4 0x18
-#define PT_GPR5 0x1C
-#define PT_GPR6 0x20
-#define PT_GPR7 0x24
-#define PT_GPR8 0x28
-#define PT_GPR9 0x2C
-#define PT_GPR10 0x30
-#define PT_GPR11 0x34
-#define PT_GPR12 0x38
-#define PT_GPR13 0x3C
-#define PT_GPR14 0x40
-#define PT_GPR15 0x44
-#define PT_ACR0 0x48
-#define PT_ACR1 0x4C
-#define PT_ACR2 0x50
-#define PT_ACR3 0x54
-#define PT_ACR4 0x58
-#define PT_ACR5 0x5C
-#define PT_ACR6 0x60
-#define PT_ACR7 0x64
-#define PT_ACR8 0x68
-#define PT_ACR9 0x6C
-#define PT_ACR10 0x70
-#define PT_ACR11 0x74
-#define PT_ACR12 0x78
-#define PT_ACR13 0x7C
-#define PT_ACR14 0x80
-#define PT_ACR15 0x84
-#define PT_ORIGGPR2 0x88
-#define PT_FPC 0x90
-/*
- * A nasty fact of life that the ptrace api
- * only supports passing of longs.
- */
-#define PT_FPR0_HI 0x98
-#define PT_FPR0_LO 0x9C
-#define PT_FPR1_HI 0xA0
-#define PT_FPR1_LO 0xA4
-#define PT_FPR2_HI 0xA8
-#define PT_FPR2_LO 0xAC
-#define PT_FPR3_HI 0xB0
-#define PT_FPR3_LO 0xB4
-#define PT_FPR4_HI 0xB8
-#define PT_FPR4_LO 0xBC
-#define PT_FPR5_HI 0xC0
-#define PT_FPR5_LO 0xC4
-#define PT_FPR6_HI 0xC8
-#define PT_FPR6_LO 0xCC
-#define PT_FPR7_HI 0xD0
-#define PT_FPR7_LO 0xD4
-#define PT_FPR8_HI 0xD8
-#define PT_FPR8_LO 0XDC
-#define PT_FPR9_HI 0xE0
-#define PT_FPR9_LO 0xE4
-#define PT_FPR10_HI 0xE8
-#define PT_FPR10_LO 0xEC
-#define PT_FPR11_HI 0xF0
-#define PT_FPR11_LO 0xF4
-#define PT_FPR12_HI 0xF8
-#define PT_FPR12_LO 0xFC
-#define PT_FPR13_HI 0x100
-#define PT_FPR13_LO 0x104
-#define PT_FPR14_HI 0x108
-#define PT_FPR14_LO 0x10C
-#define PT_FPR15_HI 0x110
-#define PT_FPR15_LO 0x114
-#define PT_CR_9 0x118
-#define PT_CR_10 0x11C
-#define PT_CR_11 0x120
-#define PT_IEEE_IP 0x13C
-#define PT_LASTOFF PT_IEEE_IP
-#define PT_ENDREGS 0x140-1
-
-#define GPR_SIZE 4
-#define CR_SIZE 4
-
-#define STACK_FRAME_OVERHEAD 96 /* size of minimum stack frame */
-
-#else /* __s390x__ */
-
-#define PT_PSWMASK 0x00
-#define PT_PSWADDR 0x08
-#define PT_GPR0 0x10
-#define PT_GPR1 0x18
-#define PT_GPR2 0x20
-#define PT_GPR3 0x28
-#define PT_GPR4 0x30
-#define PT_GPR5 0x38
-#define PT_GPR6 0x40
-#define PT_GPR7 0x48
-#define PT_GPR8 0x50
-#define PT_GPR9 0x58
-#define PT_GPR10 0x60
-#define PT_GPR11 0x68
-#define PT_GPR12 0x70
-#define PT_GPR13 0x78
-#define PT_GPR14 0x80
-#define PT_GPR15 0x88
-#define PT_ACR0 0x90
-#define PT_ACR1 0x94
-#define PT_ACR2 0x98
-#define PT_ACR3 0x9C
-#define PT_ACR4 0xA0
-#define PT_ACR5 0xA4
-#define PT_ACR6 0xA8
-#define PT_ACR7 0xAC
-#define PT_ACR8 0xB0
-#define PT_ACR9 0xB4
-#define PT_ACR10 0xB8
-#define PT_ACR11 0xBC
-#define PT_ACR12 0xC0
-#define PT_ACR13 0xC4
-#define PT_ACR14 0xC8
-#define PT_ACR15 0xCC
-#define PT_ORIGGPR2 0xD0
-#define PT_FPC 0xD8
-#define PT_FPR0 0xE0
-#define PT_FPR1 0xE8
-#define PT_FPR2 0xF0
-#define PT_FPR3 0xF8
-#define PT_FPR4 0x100
-#define PT_FPR5 0x108
-#define PT_FPR6 0x110
-#define PT_FPR7 0x118
-#define PT_FPR8 0x120
-#define PT_FPR9 0x128
-#define PT_FPR10 0x130
-#define PT_FPR11 0x138
-#define PT_FPR12 0x140
-#define PT_FPR13 0x148
-#define PT_FPR14 0x150
-#define PT_FPR15 0x158
-#define PT_CR_9 0x160
-#define PT_CR_10 0x168
-#define PT_CR_11 0x170
-#define PT_IEEE_IP 0x1A8
-#define PT_LASTOFF PT_IEEE_IP
-#define PT_ENDREGS 0x1B0-1
-
-#define GPR_SIZE 8
-#define CR_SIZE 8
-
-#define STACK_FRAME_OVERHEAD 160 /* size of minimum stack frame */
-
-#endif /* __s390x__ */
-
-#define NUM_GPRS 16
-#define NUM_FPRS 16
-#define NUM_CRS 16
-#define NUM_ACRS 16
-
-#define NUM_CR_WORDS 3
-
-#define FPR_SIZE 8
-#define FPC_SIZE 4
-#define FPC_PAD_SIZE 4 /* gcc insists on aligning the fpregs */
-#define ACR_SIZE 4
-
-
-#define PTRACE_OLDSETOPTIONS 21
-#define PTRACE_SYSEMU 31
-#define PTRACE_SYSEMU_SINGLESTEP 32
-#ifndef __ASSEMBLY__
-#include <linux/stddef.h>
-#include <linux/types.h>
-
-typedef union {
-	float f;
-	double d;
-	__u64 ui;
-	struct
-	{
-		__u32 hi;
-		__u32 lo;
-	} fp;
-} freg_t;
-
-typedef struct {
-	__u32 fpc;
-	__u32 pad;
-	freg_t fprs[NUM_FPRS];
-} s390_fp_regs;
-
-#define FPC_EXCEPTION_MASK 0xF8000000
-#define FPC_FLAGS_MASK 0x00F80000
-#define FPC_DXC_MASK 0x0000FF00
-#define FPC_RM_MASK 0x00000003
-
-/* this typedef defines how a Program Status Word looks like */
-typedef struct {
-	unsigned long mask;
-	unsigned long addr;
-} __attribute__ ((aligned(8))) psw_t;
-
-#ifndef __s390x__
-
-#define PSW_MASK_PER 0x40000000UL
-#define PSW_MASK_DAT 0x04000000UL
-#define PSW_MASK_IO 0x02000000UL
-#define PSW_MASK_EXT 0x01000000UL
-#define PSW_MASK_KEY 0x00F00000UL
-#define PSW_MASK_BASE 0x00080000UL /* always one */
-#define PSW_MASK_MCHECK 0x00040000UL
-#define PSW_MASK_WAIT 0x00020000UL
-#define PSW_MASK_PSTATE 0x00010000UL
-#define PSW_MASK_ASC 0x0000C000UL
-#define PSW_MASK_CC 0x00003000UL
-#define PSW_MASK_PM 0x00000F00UL
-#define PSW_MASK_RI 0x00000000UL
-#define PSW_MASK_EA 0x00000000UL
-#define PSW_MASK_BA 0x00000000UL
-
-#define PSW_MASK_USER 0x0000FF00UL
-
-#define PSW_ADDR_AMODE 0x80000000UL
-#define PSW_ADDR_INSN 0x7FFFFFFFUL
-
-#define PSW_DEFAULT_KEY (((unsigned long) PAGE_DEFAULT_ACC) << 20)
-
-#define PSW_ASC_PRIMARY 0x00000000UL
-#define PSW_ASC_ACCREG 0x00004000UL
-#define PSW_ASC_SECONDARY 0x00008000UL
-#define PSW_ASC_HOME 0x0000C000UL
-
-#else /* __s390x__ */
-
-#define PSW_MASK_PER 0x4000000000000000UL
-#define PSW_MASK_DAT 0x0400000000000000UL
-#define PSW_MASK_IO 0x0200000000000000UL
-#define PSW_MASK_EXT 0x0100000000000000UL
-#define PSW_MASK_BASE 0x0000000000000000UL
-#define PSW_MASK_KEY 0x00F0000000000000UL
-#define PSW_MASK_MCHECK 0x0004000000000000UL
-#define PSW_MASK_WAIT 0x0002000000000000UL
-#define PSW_MASK_PSTATE 0x0001000000000000UL
-#define PSW_MASK_ASC 0x0000C00000000000UL
-#define PSW_MASK_CC 0x0000300000000000UL
-#define PSW_MASK_PM 0x00000F0000000000UL
-#define PSW_MASK_RI 0x0000008000000000UL
-#define PSW_MASK_EA 0x0000000100000000UL
-#define PSW_MASK_BA 0x0000000080000000UL
-
-#define PSW_MASK_USER 0x0000FF0180000000UL
-
-#define PSW_ADDR_AMODE 0x0000000000000000UL
-#define PSW_ADDR_INSN 0xFFFFFFFFFFFFFFFFUL
-
-#define PSW_DEFAULT_KEY (((unsigned long) PAGE_DEFAULT_ACC) << 52)
-
-#define PSW_ASC_PRIMARY 0x0000000000000000UL
-#define PSW_ASC_ACCREG 0x0000400000000000UL
-#define PSW_ASC_SECONDARY 0x0000800000000000UL
-#define PSW_ASC_HOME 0x0000C00000000000UL
-
-#endif /* __s390x__ */
-
-
-/*
- * The s390_regs structure is used to define the elf_gregset_t.
- */
-typedef struct {
-	psw_t psw;
-	unsigned long gprs[NUM_GPRS];
-	unsigned int acrs[NUM_ACRS];
-	unsigned long orig_gpr2;
-} s390_regs;
-
-/*
- * The user_pt_regs structure exports the beginning of
- * the in-kernel pt_regs structure to user space.
- */
-typedef struct {
-	unsigned long args[1];
-	psw_t psw;
-	unsigned long gprs[NUM_GPRS];
-} user_pt_regs;
-
-/*
- * Now for the user space program event recording (trace) definitions.
- * The following structures are used only for the ptrace interface, don't
- * touch or even look at it if you don't want to modify the user-space
- * ptrace interface. In particular stay away from it for in-kernel PER.
- */
-typedef struct {
-	unsigned long cr[NUM_CR_WORDS];
-} per_cr_words;
-
-#define PER_EM_MASK 0xE8000000UL
-
-typedef struct {
-#ifdef __s390x__
-	unsigned : 32;
-#endif /* __s390x__ */
-	unsigned em_branching : 1;
-	unsigned em_instruction_fetch : 1;
-	/*
-	 * Switching on storage alteration automatically fixes
-	 * the storage alteration event bit in the users std.
-	 */
-	unsigned em_storage_alteration : 1;
-	unsigned em_gpr_alt_unused : 1;
-	unsigned em_store_real_address : 1;
-	unsigned : 3;
-	unsigned branch_addr_ctl : 1;
-	unsigned : 1;
-	unsigned storage_alt_space_ctl : 1;
-	unsigned : 21;
-	unsigned long starting_addr;
-	unsigned long ending_addr;
-} per_cr_bits;
-
-typedef struct {
-	unsigned short perc_atmid;
-	unsigned long address;
-	unsigned char access_id;
-} per_lowcore_words;
-
-typedef struct {
-	unsigned perc_branching : 1;
-	unsigned perc_instruction_fetch : 1;
-	unsigned perc_storage_alteration : 1;
-	unsigned perc_gpr_alt_unused : 1;
-	unsigned perc_store_real_address : 1;
-	unsigned : 3;
-	unsigned atmid_psw_bit_31 : 1;
-	unsigned atmid_validity_bit : 1;
-	unsigned atmid_psw_bit_32 : 1;
-	unsigned atmid_psw_bit_5 : 1;
-	unsigned atmid_psw_bit_16 : 1;
-	unsigned atmid_psw_bit_17 : 1;
-	unsigned si : 2;
-	unsigned long address;
-	unsigned : 4;
-	unsigned access_id : 4;
-} per_lowcore_bits;
-
-typedef struct {
-	union {
-		per_cr_words words;
-		per_cr_bits bits;
-	} control_regs;
-	/*
-	 * The single_step and instruction_fetch bits are obsolete,
-	 * the kernel always sets them to zero. To enable single
-	 * stepping use ptrace(PTRACE_SINGLESTEP) instead.
-	 */
-	unsigned single_step : 1;
-	unsigned instruction_fetch : 1;
-	unsigned : 30;
-	/*
-	 * These addresses are copied into cr10 & cr11 if single
-	 * stepping is switched off
-	 */
-	unsigned long starting_addr;
-	unsigned long ending_addr;
-	union {
-		per_lowcore_words words;
-		per_lowcore_bits bits;
-	} lowcore;
-} per_struct;
-
-typedef struct {
-	unsigned int len;
-	unsigned long kernel_addr;
-	unsigned long process_addr;
-} ptrace_area;
-
-/*
- * S/390 specific non posix ptrace requests. I chose unusual values so
- * they are unlikely to clash with future ptrace definitions.
- */
-#define PTRACE_PEEKUSR_AREA 0x5000
-#define PTRACE_POKEUSR_AREA 0x5001
-#define PTRACE_PEEKTEXT_AREA 0x5002
-#define PTRACE_PEEKDATA_AREA 0x5003
-#define PTRACE_POKETEXT_AREA 0x5004
-#define PTRACE_POKEDATA_AREA 0x5005
-#define PTRACE_GET_LAST_BREAK 0x5006
-#define PTRACE_PEEK_SYSTEM_CALL 0x5007
-#define PTRACE_POKE_SYSTEM_CALL 0x5008
-#define PTRACE_ENABLE_TE 0x5009
-#define PTRACE_DISABLE_TE 0x5010
-#define PTRACE_TE_ABORT_RAND 0x5011
-
-/*
- * The numbers chosen here are somewhat arbitrary but absolutely MUST
- * not overlap with any of the number assigned in <linux/ptrace.h>.
- */
-#define PTRACE_SINGLEBLOCK 12 /* resume execution until next branch */
-
-/*
- * PT_PROT definition is loosely based on hppa bsd definition in
- * gdb/hppab-nat.c
- */
-#define PTRACE_PROT 21
-
-typedef enum {
-	ptprot_set_access_watchpoint,
-	ptprot_set_write_watchpoint,
-	ptprot_disable_watchpoint
-} ptprot_flags;
-
-typedef struct {
-	unsigned long lowaddr;
-	unsigned long hiaddr;
-	ptprot_flags prot;
-} ptprot_area;
-
-/* Sequence of bytes for breakpoint illegal instruction. */
-#define S390_BREAKPOINT {0x0,0x1}
-#define S390_BREAKPOINT_U16 ((__u16)0x0001)
-#define S390_SYSCALL_OPCODE ((__u16)0x0a00)
-#define S390_SYSCALL_SIZE 2
-
-/*
- * The user_regs_struct defines the way the user registers are
- * store on the stack for signal handling.
- */
-struct user_regs_struct {
-	psw_t psw;
-	unsigned long gprs[NUM_GPRS];
-	unsigned int acrs[NUM_ACRS];
-	unsigned long orig_gpr2;
-	s390_fp_regs fp_regs;
-	/*
-	 * These per registers are in here so that gdb can modify them
-	 * itself as there is no "official" ptrace interface for hardware
-	 * watchpoints. This is the way intel does it.
-	 */
-	per_struct per_info;
-	unsigned long ieee_instruction_pointer; /* obsolete, always 0 */
-};
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* _UAPI_S390_PTRACE_H */
@@ -80,9 +80,6 @@ static void jsonw_puts(json_writer_t *self, const char *str)
 		case '"':
 			fputs("\\\"", self->out);
 			break;
-		case '\'':
-			fputs("\\\'", self->out);
-			break;
 		default:
 			putc(*str, self->out);
 		}
@@ -1,3 +1,4 @@
 /fixdep
 /resolve_btfids
 /libbpf/
+/libsubcmd/
@@ -4969,6 +4969,12 @@ union bpf_attr {
  *		different maps if key/value layout matches across maps.
  *		Every bpf_timer_set_callback() can have different callback_fn.
  *
+ *		*flags* can be one of:
+ *
+ *		**BPF_F_TIMER_ABS**
+ *			Start the timer in absolute expire value instead of the
+ *			default relative one.
+ *
  *	Return
  *		0 on success.
  *		**-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier
@@ -5325,11 +5331,22 @@ union bpf_attr {
  *	Description
  *		Write *len* bytes from *src* into *dst*, starting from *offset*
  *		into *dst*.
- *		*flags* is currently unused.
+ *
+ *		*flags* must be 0 except for skb-type dynptrs.
+ *
+ *		For skb-type dynptrs:
+ *		    *  All data slices of the dynptr are automatically
+ *		       invalidated after **bpf_dynptr_write**\ (). This is
+ *		       because writing may pull the skb and change the
+ *		       underlying packet buffer.
+ *
+ *		    *  For *flags*, please see the flags accepted by
+ *		       **bpf_skb_store_bytes**\ ().
  *	Return
  *		0 on success, -E2BIG if *offset* + *len* exceeds the length
  *		of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst*
- *		is a read-only dynptr or if *flags* is not 0.
+ *		is a read-only dynptr or if *flags* is not correct. For skb-type dynptrs,
+ *		other errors correspond to errors returned by **bpf_skb_store_bytes**\ ().
 *
 * void *bpf_dynptr_data(const struct bpf_dynptr *ptr, u32 offset, u32 len)
 *	Description
@@ -5337,6 +5354,9 @@ union bpf_attr {
 *
 *		*len* must be a statically known value. The returned data slice
 *		is invalidated whenever the dynptr is invalidated.
+ *
+ *		skb and xdp type dynptrs may not use bpf_dynptr_data. They should
+ *		instead use bpf_dynptr_slice and bpf_dynptr_slice_rdwr.
 *	Return
 *		Pointer to the underlying dynptr data, NULL if the dynptr is
 *		read-only, if the dynptr is invalid, or if the offset and length
@@ -7083,4 +7103,13 @@ struct bpf_core_relo {
 	enum bpf_core_relo_kind kind;
 };
 
+/*
+ * Flags to control bpf_timer_start() behaviour.
+ *     - BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
+ *       relative to current time.
+ */
+enum {
+	BPF_F_TIMER_ABS = (1ULL << 0),
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
@@ -1,4 +1,4 @@
 libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \
 	    netlink.o bpf_prog_linfo.o libbpf_probes.o hashmap.o \
 	    btf_dump.o ringbuf.o strset.o linker.o gen_loader.o relo_core.o \
-	    usdt.o
+	    usdt.o zip.o
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
 
 /*
- * common eBPF ELF operations.
+ * Common BPF ELF operations.
 *
 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
 * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
@@ -386,14 +386,73 @@ LIBBPF_API int bpf_link_get_fd_by_id(__u32 id);
 LIBBPF_API int bpf_link_get_fd_by_id_opts(__u32 id,
 					  const struct bpf_get_fd_by_id_opts *opts);
 LIBBPF_API int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len);
-/* Type-safe variants of bpf_obj_get_info_by_fd(). The callers still needs to
- * pass info_len, which should normally be
- * sizeof(struct bpf_{prog,map,btf,link}_info), in order to be compatible with
- * different libbpf and kernel versions.
- */
+
+/**
+ * @brief **bpf_prog_get_info_by_fd()** obtains information about the BPF
+ * program corresponding to *prog_fd*.
+ *
+ * Populates up to *info_len* bytes of *info* and updates *info_len* with the
+ * actual number of bytes written to *info*.
+ *
+ * @param prog_fd BPF program file descriptor
+ * @param info pointer to **struct bpf_prog_info** that will be populated with
+ * BPF program information
+ * @param info_len pointer to the size of *info*; on success updated with the
+ * number of bytes written to *info*
+ * @return 0, on success; negative error code, otherwise (errno is also set to
+ * the error code)
+ */
 LIBBPF_API int bpf_prog_get_info_by_fd(int prog_fd, struct bpf_prog_info *info, __u32 *info_len);
+
+/**
+ * @brief **bpf_map_get_info_by_fd()** obtains information about the BPF
+ * map corresponding to *map_fd*.
+ *
+ * Populates up to *info_len* bytes of *info* and updates *info_len* with the
+ * actual number of bytes written to *info*.
+ *
+ * @param map_fd BPF map file descriptor
+ * @param info pointer to **struct bpf_map_info** that will be populated with
+ * BPF map information
+ * @param info_len pointer to the size of *info*; on success updated with the
+ * number of bytes written to *info*
+ * @return 0, on success; negative error code, otherwise (errno is also set to
+ * the error code)
+ */
 LIBBPF_API int bpf_map_get_info_by_fd(int map_fd, struct bpf_map_info *info, __u32 *info_len);
+
+/**
+ * @brief **bpf_btf_get_info_by_fd()** obtains information about the
+ * BTF object corresponding to *btf_fd*.
+ *
+ * Populates up to *info_len* bytes of *info* and updates *info_len* with the
+ * actual number of bytes written to *info*.
+ *
+ * @param btf_fd BTF object file descriptor
+ * @param info pointer to **struct bpf_btf_info** that will be populated with
+ * BTF object information
+ * @param info_len pointer to the size of *info*; on success updated with the
+ * number of bytes written to *info*
+ * @return 0, on success; negative error code, otherwise (errno is also set to
+ * the error code)
+ */
 LIBBPF_API int bpf_btf_get_info_by_fd(int btf_fd, struct bpf_btf_info *info, __u32 *info_len);
+
+/**
+ * @brief **bpf_btf_get_info_by_fd()** obtains information about the BPF
+ * link corresponding to *link_fd*.
+ *
+ * Populates up to *info_len* bytes of *info* and updates *info_len* with the
+ * actual number of bytes written to *info*.
+ *
+ * @param link_fd BPF link file descriptor
+ * @param info pointer to **struct bpf_link_info** that will be populated with
+ * BPF link information
+ * @param info_len pointer to the size of *info*; on success updated with the
+ * number of bytes written to *info*
+ * @return 0, on success; negative error code, otherwise (errno is also set to
+ * the error code)
+ */
 LIBBPF_API int bpf_link_get_info_by_fd(int link_fd, struct bpf_link_info *info, __u32 *info_len);
 
 struct bpf_prog_query_opts {
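
Typical use of the type-safe variants documented above (map_fd is assumed to
be a valid BPF map file descriptor):

#include <stdio.h>
#include <bpf/bpf.h>

static void print_map_name(int map_fd)
{
	struct bpf_map_info info = {};
	__u32 len = sizeof(info); /* still passed, for cross-version compat */

	if (!bpf_map_get_info_by_fd(map_fd, &info, &len))
		printf("map: %s\n", info.name);
}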
@@ -174,8 +174,8 @@ enum libbpf_tristate {
 
 #define __kconfig __attribute__((section(".kconfig")))
 #define __ksym __attribute__((section(".ksyms")))
+#define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted")))
 #define __kptr __attribute__((btf_type_tag("kptr")))
-#define __kptr_ref __attribute__((btf_type_tag("kptr_ref")))
 
 #ifndef ___bpf_concat
 #define ___bpf_concat(a, b) a ## b
@@ -204,6 +204,7 @@ struct pt_regs___s390 {
 #define __PT_PARM2_SYSCALL_REG __PT_PARM2_REG
 #define __PT_PARM3_SYSCALL_REG __PT_PARM3_REG
 #define __PT_PARM4_SYSCALL_REG __PT_PARM4_REG
+#define __PT_PARM5_SYSCALL_REG uregs[4]
 #define __PT_PARM6_SYSCALL_REG uregs[5]
 #define __PT_PARM7_SYSCALL_REG uregs[6]
 
@@ -415,6 +416,8 @@ struct pt_regs___arm64 {
  * https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html
  */
 
+/* loongarch provides struct user_pt_regs instead of struct pt_regs to userspace */
+#define __PT_REGS_CAST(x) ((const struct user_pt_regs *)(x))
 #define __PT_PARM1_REG regs[4]
 #define __PT_PARM2_REG regs[5]
 #define __PT_PARM3_REG regs[6]
@@ -1000,8 +1000,6 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
 		}
 	}
 
-	err = 0;
-
 	if (!btf_data) {
 		pr_warn("failed to find '%s' ELF section in %s\n", BTF_ELF_SEC, path);
 		err = -ENODATA;
@@ -53,6 +53,7 @@
 #include "libbpf_internal.h"
 #include "hashmap.h"
 #include "bpf_gen_internal.h"
+#include "zip.h"
 
 #ifndef BPF_FS_MAGIC
 #define BPF_FS_MAGIC 0xcafe4a11
@@ -798,7 +799,6 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data,
 	progs = obj->programs;
 	nr_progs = obj->nr_programs;
 	nr_syms = symbols->d_size / sizeof(Elf64_Sym);
-	sec_off = 0;
 
 	for (i = 0; i < nr_syms; i++) {
 		sym = elf_sym_by_idx(obj, i);
@@ -2615,7 +2615,7 @@ static int bpf_object__init_maps(struct bpf_object *obj,
 	strict = !OPTS_GET(opts, relaxed_maps, false);
 	pin_root_path = OPTS_GET(opts, pin_root_path, NULL);
 
-	err = err ?: bpf_object__init_user_btf_maps(obj, strict, pin_root_path);
+	err = bpf_object__init_user_btf_maps(obj, strict, pin_root_path);
 	err = err ?: bpf_object__init_global_data_maps(obj);
 	err = err ?: bpf_object__init_kconfig_map(obj);
 	err = err ?: bpf_object__init_struct_ops_maps(obj);
@@ -9724,6 +9724,7 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
 	char errmsg[STRERR_BUFSIZE];
 	struct bpf_link_perf *link;
 	int prog_fd, link_fd = -1, err;
+	bool force_ioctl_attach;
 
 	if (!OPTS_VALID(opts, bpf_perf_event_opts))
 		return libbpf_err_ptr(-EINVAL);
@@ -9747,7 +9748,8 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
 	link->link.dealloc = &bpf_link_perf_dealloc;
 	link->perf_event_fd = pfd;
 
-	if (kernel_supports(prog->obj, FEAT_PERF_LINK)) {
+	force_ioctl_attach = OPTS_GET(opts, force_ioctl_attach, false);
+	if (kernel_supports(prog->obj, FEAT_PERF_LINK) && !force_ioctl_attach) {
 		DECLARE_LIBBPF_OPTS(bpf_link_create_opts, link_opts,
 			.perf_event.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0));
 
@ -10106,6 +10108,7 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
|
|||
const struct bpf_kprobe_opts *opts)
|
||||
{
|
||||
DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
|
||||
enum probe_attach_mode attach_mode;
|
||||
char errmsg[STRERR_BUFSIZE];
|
||||
char *legacy_probe = NULL;
|
||||
struct bpf_link *link;
|
||||
|
@@ -10116,11 +10119,32 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
	if (!OPTS_VALID(opts, bpf_kprobe_opts))
		return libbpf_err_ptr(-EINVAL);

	attach_mode = OPTS_GET(opts, attach_mode, PROBE_ATTACH_MODE_DEFAULT);
	retprobe = OPTS_GET(opts, retprobe, false);
	offset = OPTS_GET(opts, offset, 0);
	pe_opts.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0);

	legacy = determine_kprobe_perf_type() < 0;
	switch (attach_mode) {
	case PROBE_ATTACH_MODE_LEGACY:
		legacy = true;
		pe_opts.force_ioctl_attach = true;
		break;
	case PROBE_ATTACH_MODE_PERF:
		if (legacy)
			return libbpf_err_ptr(-ENOTSUP);
		pe_opts.force_ioctl_attach = true;
		break;
	case PROBE_ATTACH_MODE_LINK:
		if (legacy || !kernel_supports(prog->obj, FEAT_PERF_LINK))
			return libbpf_err_ptr(-ENOTSUP);
		break;
	case PROBE_ATTACH_MODE_DEFAULT:
		break;
	default:
		return libbpf_err_ptr(-EINVAL);
	}

	if (!legacy) {
		pfd = perf_event_open_probe(false /* uprobe */, retprobe,
					    func_name, offset,

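The attach-mode selection above is driven entirely through bpf_kprobe_opts. A minimal usage sketch against the new API, assuming an already-loaded program handle (the program variable and the probed symbol are chosen for illustration); if the requested mode is unavailable on the running kernel, the call fails with -ENOTSUP:

#include <bpf/libbpf.h>

/* Force the tracefs-based legacy attach path for a kprobe. */
static struct bpf_link *attach_kprobe_legacy(struct bpf_program *prog)
{
	DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, opts,
		.attach_mode = PROBE_ATTACH_MODE_LEGACY);

	return bpf_program__attach_kprobe_opts(prog, "do_sys_openat2", &opts);
}
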
@@ -10531,32 +10555,19 @@ static Elf_Scn *elf_find_next_scn_by_type(Elf *elf, int sh_type, Elf_Scn *scn)
	return NULL;
}

/* Find offset of function name in object specified by path. "name" matches
 * symbol name or name@@LIB for library functions.
/* Find offset of function name in the provided ELF object. "binary_path" is
 * the path to the ELF binary represented by "elf", and only used for error
 * reporting matters. "name" matches symbol name or name@@LIB for library
 * functions.
 */
static long elf_find_func_offset(const char *binary_path, const char *name)
static long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
{
	int fd, i, sh_types[2] = { SHT_DYNSYM, SHT_SYMTAB };
	int i, sh_types[2] = { SHT_DYNSYM, SHT_SYMTAB };
	bool is_shared_lib, is_name_qualified;
	char errmsg[STRERR_BUFSIZE];
	long ret = -ENOENT;
	size_t name_len;
	GElf_Ehdr ehdr;
	Elf *elf;

	fd = open(binary_path, O_RDONLY | O_CLOEXEC);
	if (fd < 0) {
		ret = -errno;
		pr_warn("failed to open %s: %s\n", binary_path,
			libbpf_strerror_r(ret, errmsg, sizeof(errmsg)));
		return ret;
	}
	elf = elf_begin(fd, ELF_C_READ_MMAP, NULL);
	if (!elf) {
		pr_warn("elf: could not read elf from %s: %s\n", binary_path, elf_errmsg(-1));
		close(fd);
		return -LIBBPF_ERRNO__FORMAT;
	}
	if (!gelf_getehdr(elf, &ehdr)) {
		pr_warn("elf: failed to get ehdr from %s: %s\n", binary_path, elf_errmsg(-1));
		ret = -LIBBPF_ERRNO__FORMAT;

@@ -10569,7 +10580,7 @@ static long elf_find_func_offset(const char *binary_path, const char *name)
	/* Does name specify "@@LIB"? */
	is_name_qualified = strstr(name, "@@") != NULL;

	/* Search SHT_DYNSYM, SHT_SYMTAB for symbol. This search order is used because if
	/* Search SHT_DYNSYM, SHT_SYMTAB for symbol. This search order is used because if
	 * a binary is stripped, it may only have SHT_DYNSYM, and a fully-statically
	 * linked binary may not have SHT_DYMSYM, so absence of a section should not be
	 * reported as a warning/error.

@@ -10682,11 +10693,101 @@ static long elf_find_func_offset(const char *binary_path, const char *name)
		}
	}
out:
	return ret;
}

/* Find offset of function name in ELF object specified by path. "name" matches
 * symbol name or name@@LIB for library functions.
 */
static long elf_find_func_offset_from_file(const char *binary_path, const char *name)
{
	char errmsg[STRERR_BUFSIZE];
	long ret = -ENOENT;
	Elf *elf;
	int fd;

	fd = open(binary_path, O_RDONLY | O_CLOEXEC);
	if (fd < 0) {
		ret = -errno;
		pr_warn("failed to open %s: %s\n", binary_path,
			libbpf_strerror_r(ret, errmsg, sizeof(errmsg)));
		return ret;
	}
	elf = elf_begin(fd, ELF_C_READ_MMAP, NULL);
	if (!elf) {
		pr_warn("elf: could not read elf from %s: %s\n", binary_path, elf_errmsg(-1));
		close(fd);
		return -LIBBPF_ERRNO__FORMAT;
	}

	ret = elf_find_func_offset(elf, binary_path, name);
	elf_end(elf);
	close(fd);
	return ret;
}

/* Find offset of function name in archive specified by path. Currently
 * supported are .zip files that do not compress their contents, as used on
 * Android in the form of APKs, for example. "file_name" is the name of the ELF
 * file inside the archive. "func_name" matches symbol name or name@@LIB for
 * library functions.
 *
 * An overview of the APK format specifically provided here:
 * https://en.wikipedia.org/w/index.php?title=Apk_(file_format)&oldid=1139099120#Package_contents
 */
static long elf_find_func_offset_from_archive(const char *archive_path, const char *file_name,
					      const char *func_name)
{
	struct zip_archive *archive;
	struct zip_entry entry;
	long ret;
	Elf *elf;

	archive = zip_archive_open(archive_path);
	if (IS_ERR(archive)) {
		ret = PTR_ERR(archive);
		pr_warn("zip: failed to open %s: %ld\n", archive_path, ret);
		return ret;
	}

	ret = zip_archive_find_entry(archive, file_name, &entry);
	if (ret) {
		pr_warn("zip: could not find archive member %s in %s: %ld\n", file_name,
			archive_path, ret);
		goto out;
	}
	pr_debug("zip: found entry for %s in %s at 0x%lx\n", file_name, archive_path,
		 (unsigned long)entry.data_offset);

	if (entry.compression) {
		pr_warn("zip: entry %s of %s is compressed and cannot be handled\n", file_name,
			archive_path);
		ret = -LIBBPF_ERRNO__FORMAT;
		goto out;
	}

	elf = elf_memory((void *)entry.data, entry.data_length);
	if (!elf) {
		pr_warn("elf: could not read elf file %s from %s: %s\n", file_name, archive_path,
			elf_errmsg(-1));
		ret = -LIBBPF_ERRNO__LIBELF;
		goto out;
	}

	ret = elf_find_func_offset(elf, file_name, func_name);
	if (ret > 0) {
		pr_debug("elf: symbol address match for %s of %s in %s: 0x%x + 0x%lx = 0x%lx\n",
			 func_name, file_name, archive_path, entry.data_offset, ret,
			 ret + entry.data_offset);
		ret += entry.data_offset;
	}
	elf_end(elf);

out:
	zip_archive_close(archive);
	return ret;
}

static const char *arch_specific_lib_paths(void)
{
	/*

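The pr_debug format string above spells out the offset math: the value returned for an archive member is relative to the start of the archive file, not the embedded ELF. A hedged illustration with made-up numbers:

#include <stdio.h>

/* The uprobe offset handed to the kernel must be relative to the start of
 * the APK file, so the symbol offset found inside the member ELF is
 * shifted by the member's data_offset within the archive. Both numbers
 * below are invented for illustration.
 */
int main(void)
{
	long sym_off = 0x1040;		/* elf_find_func_offset() result for the member */
	long data_off = 0x20000;	/* zip_entry.data_offset of the member */

	printf("uprobe offset = 0x%lx\n", sym_off + data_off);	/* 0x21040 */
	return 0;
}
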
@@ -10772,9 +10873,11 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
				const char *binary_path, size_t func_offset,
				const struct bpf_uprobe_opts *opts)
{
	DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
	const char *archive_path = NULL, *archive_sep = NULL;
	char errmsg[STRERR_BUFSIZE], *legacy_probe = NULL;
	char full_binary_path[PATH_MAX];
	DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
	enum probe_attach_mode attach_mode;
	char full_path[PATH_MAX];
	struct bpf_link *link;
	size_t ref_ctr_off;
	int pfd, err;

@@ -10784,6 +10887,7 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
	if (!OPTS_VALID(opts, bpf_uprobe_opts))
		return libbpf_err_ptr(-EINVAL);

	attach_mode = OPTS_GET(opts, attach_mode, PROBE_ATTACH_MODE_DEFAULT);
	retprobe = OPTS_GET(opts, retprobe, false);
	ref_ctr_off = OPTS_GET(opts, ref_ctr_offset, 0);
	pe_opts.bpf_cookie = OPTS_GET(opts, bpf_cookie, 0);

@@ -10791,27 +10895,60 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
	if (!binary_path)
		return libbpf_err_ptr(-EINVAL);

	if (!strchr(binary_path, '/')) {
		err = resolve_full_path(binary_path, full_binary_path,
					sizeof(full_binary_path));
	/* Check if "binary_path" refers to an archive. */
	archive_sep = strstr(binary_path, "!/");
	if (archive_sep) {
		full_path[0] = '\0';
		libbpf_strlcpy(full_path, binary_path,
			       min(sizeof(full_path), (size_t)(archive_sep - binary_path + 1)));
		archive_path = full_path;
		binary_path = archive_sep + 2;
	} else if (!strchr(binary_path, '/')) {
		err = resolve_full_path(binary_path, full_path, sizeof(full_path));
		if (err) {
			pr_warn("prog '%s': failed to resolve full path for '%s': %d\n",
				prog->name, binary_path, err);
			return libbpf_err_ptr(err);
		}
		binary_path = full_binary_path;
		binary_path = full_path;
	}
	func_name = OPTS_GET(opts, func_name, NULL);
	if (func_name) {
		long sym_off;

		sym_off = elf_find_func_offset(binary_path, func_name);
		if (archive_path) {
			sym_off = elf_find_func_offset_from_archive(archive_path, binary_path,
								    func_name);
			binary_path = archive_path;
		} else {
			sym_off = elf_find_func_offset_from_file(binary_path, func_name);
		}
		if (sym_off < 0)
			return libbpf_err_ptr(sym_off);
		func_offset += sym_off;
	}

	legacy = determine_uprobe_perf_type() < 0;
	switch (attach_mode) {
	case PROBE_ATTACH_MODE_LEGACY:
		legacy = true;
		pe_opts.force_ioctl_attach = true;
		break;
	case PROBE_ATTACH_MODE_PERF:
		if (legacy)
			return libbpf_err_ptr(-ENOTSUP);
		pe_opts.force_ioctl_attach = true;
		break;
	case PROBE_ATTACH_MODE_LINK:
		if (legacy || !kernel_supports(prog->obj, FEAT_PERF_LINK))
			return libbpf_err_ptr(-ENOTSUP);
		break;
	case PROBE_ATTACH_MODE_DEFAULT:
		break;
	default:
		return libbpf_err_ptr(-EINVAL);
	}

	if (!legacy) {
		pfd = perf_event_open_probe(true /* uprobe */, retprobe, binary_path,
					    func_offset, pid, ref_ctr_off);

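With the parsing above, a binary_path of the form "<archive>!/<member>" attaches inside an ELF object embedded in an uncompressed zip such as an Android APK. A hedged usage sketch; the paths, function name, and program handle are hypothetical:

#include <bpf/libbpf.h>

/* "prog" is an already-loaded uprobe program; the part before "!/" names
 * the APK and the part after names the ELF member inside it.
 */
static struct bpf_link *attach_apk_uprobe(struct bpf_program *prog, pid_t pid)
{
	DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, opts,
		.func_name = "Java_com_example_native_fn");

	return bpf_program__attach_uprobe_opts(prog, pid,
			"/data/app/com.example/base.apk!/lib/arm64-v8a/libapp.so",
			0 /* func_offset */, &opts);
}
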
@@ -447,12 +447,15 @@ LIBBPF_API struct bpf_link *
bpf_program__attach(const struct bpf_program *prog);

struct bpf_perf_event_opts {
	/* size of this struct, for forward/backward compatiblity */
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	/* custom user-provided value fetchable through bpf_get_attach_cookie() */
	__u64 bpf_cookie;
	/* don't use BPF link when attach BPF program */
	bool force_ioctl_attach;
	size_t :0;
};
#define bpf_perf_event_opts__last_field bpf_cookie
#define bpf_perf_event_opts__last_field force_ioctl_attach

LIBBPF_API struct bpf_link *
bpf_program__attach_perf_event(const struct bpf_program *prog, int pfd);

@@ -461,8 +464,25 @@ LIBBPF_API struct bpf_link *
bpf_program__attach_perf_event_opts(const struct bpf_program *prog, int pfd,
				    const struct bpf_perf_event_opts *opts);

/**
 * enum probe_attach_mode - the mode to attach kprobe/uprobe
 *
 * force libbpf to attach kprobe/uprobe in specific mode, -ENOTSUP will
 * be returned if it is not supported by the kernel.
 */
enum probe_attach_mode {
	/* attach probe in latest supported mode by kernel */
	PROBE_ATTACH_MODE_DEFAULT = 0,
	/* attach probe in legacy mode, using debugfs/tracefs */
	PROBE_ATTACH_MODE_LEGACY,
	/* create perf event with perf_event_open() syscall */
	PROBE_ATTACH_MODE_PERF,
	/* attach probe with BPF link */
	PROBE_ATTACH_MODE_LINK,
};

struct bpf_kprobe_opts {
	/* size of this struct, for forward/backward compatiblity */
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	/* custom user-provided value fetchable through bpf_get_attach_cookie() */
	__u64 bpf_cookie;

@@ -470,9 +490,11 @@ struct bpf_kprobe_opts {
	size_t offset;
	/* kprobe is return probe */
	bool retprobe;
	/* kprobe attach mode */
	enum probe_attach_mode attach_mode;
	size_t :0;
};
#define bpf_kprobe_opts__last_field retprobe
#define bpf_kprobe_opts__last_field attach_mode

LIBBPF_API struct bpf_link *
bpf_program__attach_kprobe(const struct bpf_program *prog, bool retprobe,

@@ -506,7 +528,7 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
				      const struct bpf_kprobe_multi_opts *opts);

struct bpf_ksyscall_opts {
	/* size of this struct, for forward/backward compatiblity */
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	/* custom user-provided value fetchable through bpf_get_attach_cookie() */
	__u64 bpf_cookie;

@@ -552,7 +574,7 @@ bpf_program__attach_ksyscall(const struct bpf_program *prog,
			     const struct bpf_ksyscall_opts *opts);

struct bpf_uprobe_opts {
	/* size of this struct, for forward/backward compatiblity */
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	/* offset of kernel reference counted USDT semaphore, added in
	 * a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe")

@@ -570,9 +592,11 @@ struct bpf_uprobe_opts {
	 * binary_path.
	 */
	const char *func_name;
	/* uprobe attach mode */
	enum probe_attach_mode attach_mode;
	size_t :0;
};
#define bpf_uprobe_opts__last_field func_name
#define bpf_uprobe_opts__last_field attach_mode

/**
 * @brief **bpf_program__attach_uprobe()** attaches a BPF program

@@ -646,7 +670,7 @@ bpf_program__attach_usdt(const struct bpf_program *prog,
			 const struct bpf_usdt_opts *opts);

struct bpf_tracepoint_opts {
	/* size of this struct, for forward/backward compatiblity */
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	/* custom user-provided value fetchable through bpf_get_attach_cookie() */
	__u64 bpf_cookie;

@@ -1110,7 +1134,7 @@ struct user_ring_buffer;
typedef int (*ring_buffer_sample_fn)(void *ctx, void *data, size_t size);

struct ring_buffer_opts {
	size_t sz; /* size of this struct, for forward/backward compatiblity */
	size_t sz; /* size of this struct, for forward/backward compatibility */
};

#define ring_buffer_opts__last_field sz

@@ -1475,7 +1499,7 @@ LIBBPF_API void
bpf_object__destroy_subskeleton(struct bpf_object_subskeleton *s);

struct gen_loader_opts {
	size_t sz; /* size of this struct, for forward/backward compatiblity */
	size_t sz; /* size of this struct, for forward/backward compatibility */
	const char *data;
	const char *insns;
	__u32 data_sz;

@@ -1493,13 +1517,13 @@ enum libbpf_tristate {
};

struct bpf_linker_opts {
	/* size of this struct, for forward/backward compatiblity */
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
};
#define bpf_linker_opts__last_field sz

struct bpf_linker_file_opts {
	/* size of this struct, for forward/backward compatiblity */
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
};
#define bpf_linker_file_opts__last_field sz

@@ -1542,7 +1566,7 @@ typedef int (*libbpf_prog_attach_fn_t)(const struct bpf_program *prog, long cook
				       struct bpf_link **link);

struct libbpf_prog_handler_opts {
	/* size of this struct, for forward/backward compatiblity */
	/* size of this struct, for forward/backward compatibility */
	size_t sz;
	/* User-provided value that is passed to prog_setup_fn,
	 * prog_prepare_load_fn, and prog_attach_fn callbacks. Allows user to

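The header changes above add force_ioctl_attach so callers can opt out of BPF links even on kernels that support them. A hedged sketch against the updated API; the program handle and perf event fd are assumed to come from the surrounding application:

#include <bpf/libbpf.h>

/* Attach to an existing perf event fd via the ioctl path, bypassing
 * BPF links regardless of kernel support.
 */
static struct bpf_link *attach_via_ioctl(struct bpf_program *prog, int perf_fd)
{
	DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, opts,
		.force_ioctl_attach = true);

	return bpf_program__attach_perf_event_opts(prog, perf_fd, &opts);
}
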
@@ -1997,7 +1997,6 @@ add_sym:
static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *obj)
{
	struct src_sec *src_symtab = &obj->secs[obj->symtab_sec_idx];
	struct dst_sec *dst_symtab;
	int i, err;

	for (i = 1; i < obj->sec_cnt; i++) {

@@ -2030,9 +2029,6 @@ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *ob
			return -1;
		}

		/* add_dst_sec() above could have invalidated linker->secs */
		dst_symtab = &linker->secs[linker->symtab_sec_idx];

		/* shdr->sh_link points to SYMTAB */
		dst_sec->shdr->sh_link = linker->symtab_sec_idx;

@@ -2049,16 +2045,13 @@ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *ob
		dst_rel = dst_sec->raw_data + src_sec->dst_off;
		n = src_sec->shdr->sh_size / src_sec->shdr->sh_entsize;
		for (j = 0; j < n; j++, src_rel++, dst_rel++) {
			size_t src_sym_idx = ELF64_R_SYM(src_rel->r_info);
			size_t sym_type = ELF64_R_TYPE(src_rel->r_info);
			Elf64_Sym *src_sym, *dst_sym;
			size_t dst_sym_idx;
			size_t src_sym_idx, dst_sym_idx, sym_type;
			Elf64_Sym *src_sym;

			src_sym_idx = ELF64_R_SYM(src_rel->r_info);
			src_sym = src_symtab->data->d_buf + sizeof(*src_sym) * src_sym_idx;

			dst_sym_idx = obj->sym_map[src_sym_idx];
			dst_sym = dst_symtab->raw_data + sizeof(*dst_sym) * dst_sym_idx;
			dst_rel->r_offset += src_linked_sec->dst_off;
			sym_type = ELF64_R_TYPE(src_rel->r_info);
			dst_rel->r_info = ELF64_R_INFO(dst_sym_idx, sym_type);

@@ -468,8 +468,13 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts)
		return 0;

	err = libbpf_netlink_resolve_genl_family_id("netdev", sizeof("netdev"), &id);
	if (err < 0)
	if (err < 0) {
		if (err == -ENOENT) {
			opts->feature_flags = 0;
			goto skip_feature_flags;
		}
		return libbpf_err(err);
	}

	memset(&req, 0, sizeof(req));
	req.nh.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN);

@@ -489,6 +494,7 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts)

	opts->feature_flags = md.flags;

skip_feature_flags:
	return 0;
}

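With the -ENOENT fallback above, feature detection degrades gracefully on kernels that lack the "netdev" genetlink family instead of failing the whole query. A hedged sketch of reading the XDP feature flags (the ifindex is supplied by the caller):

#include <stdio.h>
#include <bpf/libbpf.h>

/* On old kernels, opts.feature_flags now simply comes back as 0. */
static void print_xdp_features(int ifindex)
{
	DECLARE_LIBBPF_OPTS(bpf_xdp_query_opts, opts);

	if (bpf_xdp_query(ifindex, 0, &opts) < 0) {
		fprintf(stderr, "bpf_xdp_query() failed\n");
		return;
	}
	printf("XDP feature flags: 0x%llx\n",
	       (unsigned long long)opts.feature_flags);
}
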
@@ -1551,9 +1551,6 @@ int __bpf_core_types_match(const struct btf *local_btf, __u32 local_id, const st
	if (level <= 0)
		return -EINVAL;

	local_t = btf_type_by_id(local_btf, local_id);
	targ_t = btf_type_by_id(targ_btf, targ_id);

recur:
	depth--;
	if (depth < 0)

@@ -0,0 +1,328 @@
// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
/*
 * Routines for dealing with .zip archives.
 *
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 */

#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#include "libbpf_internal.h"
#include "zip.h"

/* Specification of ZIP file format can be found here:
 * https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
 * For a high level overview of the structure of a ZIP file see
 * sections 4.3.1 - 4.3.6.
 *
 * Data structures appearing in ZIP files do not contain any
 * padding and they might be misaligned. To allow us to safely
 * operate on pointers to such structures and their members, we
 * declare the types as packed.
 */

#define END_OF_CD_RECORD_MAGIC 0x06054b50

/* See section 4.3.16 of the spec. */
struct end_of_cd_record {
	/* Magic value equal to END_OF_CD_RECORD_MAGIC */
	__u32 magic;

	/* Number of the file containing this structure or 0xFFFF if ZIP64 archive.
	 * Zip archive might span multiple files (disks).
	 */
	__u16 this_disk;

	/* Number of the file containing the beginning of the central directory or
	 * 0xFFFF if ZIP64 archive.
	 */
	__u16 cd_disk;

	/* Number of central directory records on this disk or 0xFFFF if ZIP64
	 * archive.
	 */
	__u16 cd_records;

	/* Number of central directory records on all disks or 0xFFFF if ZIP64
	 * archive.
	 */
	__u16 cd_records_total;

	/* Size of the central directory record or 0xFFFFFFFF if ZIP64 archive. */
	__u32 cd_size;

	/* Offset of the central directory from the beginning of the archive or
	 * 0xFFFFFFFF if ZIP64 archive.
	 */
	__u32 cd_offset;

	/* Length of comment data following end of central directory record. */
	__u16 comment_length;

	/* Up to 64k of arbitrary bytes. */
	/* uint8_t comment[comment_length] */
} __attribute__((packed));

#define CD_FILE_HEADER_MAGIC 0x02014b50
#define FLAG_ENCRYPTED (1 << 0)
#define FLAG_HAS_DATA_DESCRIPTOR (1 << 3)

/* See section 4.3.12 of the spec. */
struct cd_file_header {
	/* Magic value equal to CD_FILE_HEADER_MAGIC. */
	__u32 magic;
	__u16 version;
	/* Minimum zip version needed to extract the file. */
	__u16 min_version;
	__u16 flags;
	__u16 compression;
	__u16 last_modified_time;
	__u16 last_modified_date;
	__u32 crc;
	__u32 compressed_size;
	__u32 uncompressed_size;
	__u16 file_name_length;
	__u16 extra_field_length;
	__u16 file_comment_length;
	/* Number of the disk where the file starts or 0xFFFF if ZIP64 archive. */
	__u16 disk;
	__u16 internal_attributes;
	__u32 external_attributes;
	/* Offset from the start of the disk containing the local file header to the
	 * start of the local file header.
	 */
	__u32 offset;
} __attribute__((packed));

#define LOCAL_FILE_HEADER_MAGIC 0x04034b50

/* See section 4.3.7 of the spec. */
struct local_file_header {
	/* Magic value equal to LOCAL_FILE_HEADER_MAGIC. */
	__u32 magic;
	/* Minimum zip version needed to extract the file. */
	__u16 min_version;
	__u16 flags;
	__u16 compression;
	__u16 last_modified_time;
	__u16 last_modified_date;
	__u32 crc;
	__u32 compressed_size;
	__u32 uncompressed_size;
	__u16 file_name_length;
	__u16 extra_field_length;
} __attribute__((packed));

struct zip_archive {
	void *data;
	__u32 size;
	__u32 cd_offset;
	__u32 cd_records;
};

static void *check_access(struct zip_archive *archive, __u32 offset, __u32 size)
{
	if (offset + size > archive->size || offset > offset + size)
		return NULL;

	return archive->data + offset;
}

/* Returns 0 on success, -EINVAL on error and -ENOTSUP if the eocd indicates the
 * archive uses features which are not supported.
 */
static int try_parse_end_of_cd(struct zip_archive *archive, __u32 offset)
{
	__u16 comment_length, cd_records;
	struct end_of_cd_record *eocd;
	__u32 cd_offset, cd_size;

	eocd = check_access(archive, offset, sizeof(*eocd));
	if (!eocd || eocd->magic != END_OF_CD_RECORD_MAGIC)
		return -EINVAL;

	comment_length = eocd->comment_length;
	if (offset + sizeof(*eocd) + comment_length != archive->size)
		return -EINVAL;

	cd_records = eocd->cd_records;
	if (eocd->this_disk != 0 || eocd->cd_disk != 0 || eocd->cd_records_total != cd_records)
		/* This is a valid eocd, but we only support single-file non-ZIP64 archives. */
		return -ENOTSUP;

	cd_offset = eocd->cd_offset;
	cd_size = eocd->cd_size;
	if (!check_access(archive, cd_offset, cd_size))
		return -EINVAL;

	archive->cd_offset = cd_offset;
	archive->cd_records = cd_records;
	return 0;
}

static int find_cd(struct zip_archive *archive)
{
	int rc = -EINVAL;
	int64_t limit;
	__u32 offset;

	if (archive->size <= sizeof(struct end_of_cd_record))
		return -EINVAL;

	/* Because the end of central directory ends with a variable length array of
	 * up to 0xFFFF bytes we can't know exactly where it starts and need to
	 * search for it at the end of the file, scanning the (limit, offset] range.
	 */
	offset = archive->size - sizeof(struct end_of_cd_record);
	limit = (int64_t)offset - (1 << 16);

	for (; offset >= 0 && offset > limit && rc != 0; offset--) {
		rc = try_parse_end_of_cd(archive, offset);
		if (rc == -ENOTSUP)
			break;
	}
	return rc;
}

struct zip_archive *zip_archive_open(const char *path)
{
	struct zip_archive *archive;
	int err, fd;
	off_t size;
	void *data;

	fd = open(path, O_RDONLY | O_CLOEXEC);
	if (fd < 0)
		return ERR_PTR(-errno);

	size = lseek(fd, 0, SEEK_END);
	if (size == (off_t)-1 || size > UINT32_MAX) {
		close(fd);
		return ERR_PTR(-EINVAL);
	}

	data = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
	err = -errno;
	close(fd);

	if (data == MAP_FAILED)
		return ERR_PTR(err);

	archive = malloc(sizeof(*archive));
	if (!archive) {
		munmap(data, size);
		return ERR_PTR(-ENOMEM);
	};

	archive->data = data;
	archive->size = size;

	err = find_cd(archive);
	if (err) {
		munmap(data, size);
		free(archive);
		return ERR_PTR(err);
	}

	return archive;
}

void zip_archive_close(struct zip_archive *archive)
{
	munmap(archive->data, archive->size);
	free(archive);
}

static struct local_file_header *local_file_header_at_offset(struct zip_archive *archive,
							     __u32 offset)
{
	struct local_file_header *lfh;

	lfh = check_access(archive, offset, sizeof(*lfh));
	if (!lfh || lfh->magic != LOCAL_FILE_HEADER_MAGIC)
		return NULL;

	return lfh;
}

static int get_entry_at_offset(struct zip_archive *archive, __u32 offset, struct zip_entry *out)
{
	struct local_file_header *lfh;
	__u32 compressed_size;
	const char *name;
	void *data;

	lfh = local_file_header_at_offset(archive, offset);
	if (!lfh)
		return -EINVAL;

	offset += sizeof(*lfh);
	if ((lfh->flags & FLAG_ENCRYPTED) || (lfh->flags & FLAG_HAS_DATA_DESCRIPTOR))
		return -EINVAL;

	name = check_access(archive, offset, lfh->file_name_length);
	if (!name)
		return -EINVAL;

	offset += lfh->file_name_length;
	if (!check_access(archive, offset, lfh->extra_field_length))
		return -EINVAL;

	offset += lfh->extra_field_length;
	compressed_size = lfh->compressed_size;
	data = check_access(archive, offset, compressed_size);
	if (!data)
		return -EINVAL;

	out->compression = lfh->compression;
	out->name_length = lfh->file_name_length;
	out->name = name;
	out->data = data;
	out->data_length = compressed_size;
	out->data_offset = offset;

	return 0;
}

int zip_archive_find_entry(struct zip_archive *archive, const char *file_name,
			   struct zip_entry *out)
{
	size_t file_name_length = strlen(file_name);
	__u32 i, offset = archive->cd_offset;

	for (i = 0; i < archive->cd_records; ++i) {
		__u16 cdfh_name_length, cdfh_flags;
		struct cd_file_header *cdfh;
		const char *cdfh_name;

		cdfh = check_access(archive, offset, sizeof(*cdfh));
		if (!cdfh || cdfh->magic != CD_FILE_HEADER_MAGIC)
			return -EINVAL;

		offset += sizeof(*cdfh);
		cdfh_name_length = cdfh->file_name_length;
		cdfh_name = check_access(archive, offset, cdfh_name_length);
		if (!cdfh_name)
			return -EINVAL;

		cdfh_flags = cdfh->flags;
		if ((cdfh_flags & FLAG_ENCRYPTED) == 0 &&
		    (cdfh_flags & FLAG_HAS_DATA_DESCRIPTOR) == 0 &&
		    file_name_length == cdfh_name_length &&
		    memcmp(file_name, archive->data + offset, file_name_length) == 0) {
			return get_entry_at_offset(archive, cdfh->offset, out);
		}

		offset += cdfh_name_length;
		offset += cdfh->extra_field_length;
		offset += cdfh->file_comment_length;
	}

	return -ENOENT;
}

@@ -0,0 +1,47 @@
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */

#ifndef __LIBBPF_ZIP_H
#define __LIBBPF_ZIP_H

#include <linux/types.h>

/* Represents an open zip archive.
 * Only basic ZIP files are supported, in particular the following are not
 * supported:
 * - encryption
 * - streaming
 * - multi-part ZIP files
 * - ZIP64
 */
struct zip_archive;

/* Carries information on name, compression method, and data corresponding to a
 * file in a zip archive.
 */
struct zip_entry {
	/* Compression method as defined in pkzip spec. 0 means data is uncompressed. */
	__u16 compression;

	/* Non-null terminated name of the file. */
	const char *name;
	/* Length of the file name. */
	__u16 name_length;

	/* Pointer to the file data. */
	const void *data;
	/* Length of the file data. */
	__u32 data_length;
	/* Offset of the file data within the archive. */
	__u32 data_offset;
};

/* Open a zip archive. Returns NULL in case of an error. */
struct zip_archive *zip_archive_open(const char *path);

/* Close a zip archive and release resources. */
void zip_archive_close(struct zip_archive *archive);

/* Look up an entry corresponding to a file in given zip archive. */
int zip_archive_find_entry(struct zip_archive *archive, const char *name, struct zip_entry *out);

#endif

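A hedged usage sketch of the internal archive API declared above, mirroring how the elf_find_func_offset_from_archive() caller consumes it. Note that despite the header comment, the implementation returns ERR_PTR-encoded pointers, so this checks with the internal IS_ERR()/PTR_ERR() helpers as the libbpf.c caller does:

#include <stdio.h>

#include "libbpf_internal.h"	/* IS_ERR()/PTR_ERR() */
#include "zip.h"

static int print_member_offset(const char *archive_path, const char *member)
{
	struct zip_archive *archive;
	struct zip_entry entry;
	int err;

	archive = zip_archive_open(archive_path);
	if (IS_ERR(archive))
		return (int)PTR_ERR(archive);

	err = zip_archive_find_entry(archive, member, &entry);
	if (!err)
		printf("%s: offset 0x%x, %u bytes, compression %u\n",
		       member, entry.data_offset, entry.data_length,
		       (unsigned int)entry.compression);

	zip_archive_close(archive);
	return err;
}
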
@@ -108,6 +108,8 @@ endif # GCC_TOOLCHAIN_DIR
endif # CLANG_CROSS_FLAGS
CFLAGS += $(CLANG_CROSS_FLAGS)
AFLAGS += $(CLANG_CROSS_FLAGS)
else
CLANG_CROSS_FLAGS :=
endif # CROSS_COMPILE

# Hack to avoid type-punned warnings on old systems such as RHEL5:

@@ -4,6 +4,8 @@ bloom_filter_map # failed to find kernel BTF type ID of
bpf_cookie # failed to open_and_load program: -524 (trampoline)
bpf_loop # attaches to __x64_sys_nanosleep
cgrp_local_storage # prog_attach unexpected error: -524 (trampoline)
dynptr/test_dynptr_skb_data
dynptr/test_skb_readonly
fexit_sleep # fexit_skel_load fexit skeleton failed (trampoline)
get_stack_raw_tp # user_stack corrupted user stack (no backchain userspace)
kprobe_multi_bench_attach # bpf_program__attach_kprobe_multi_opts unexpected error: -95

@@ -338,7 +338,8 @@ $(RESOLVE_BTFIDS): $(HOST_BPFOBJ) | $(HOST_BUILD_DIR)/resolve_btfids \
define get_sys_includes
$(shell $(1) $(2) -v -E - </dev/null 2>&1 \
	| sed -n '/<...> search starts here:/,/End of search list./{ s| \(/.*\)|-idirafter \1|p }') \
$(shell $(1) $(2) -dM -E - </dev/null | grep '__riscv_xlen ' | awk '{printf("-D__riscv_xlen=%d -D__BITS_PER_LONG=%d", $$3, $$3)}')
$(shell $(1) $(2) -dM -E - </dev/null | grep '__riscv_xlen ' | awk '{printf("-D__riscv_xlen=%d -D__BITS_PER_LONG=%d", $$3, $$3)}') \
$(shell $(1) $(2) -dM -E - </dev/null | grep '__loongarch_grlen ' | awk '{printf("-D__BITS_PER_LONG=%d", $$3)}')
endef

# Determine target endianness.

@@ -356,7 +357,7 @@ BPF_CFLAGS = -g -Werror -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN) \
	     -I$(abspath $(OUTPUT)/../usr/include)

CLANG_CFLAGS = $(CLANG_SYS_INCLUDES) \
	       -Wno-compare-distinct-pointer-types
	       -Wno-compare-distinct-pointer-types -Wuninitialized

$(OUTPUT)/test_l4lb_noinline.o: BPF_CFLAGS += -fno-inline
$(OUTPUT)/test_xdp_noinline.o: BPF_CFLAGS += -fno-inline

@@ -558,7 +559,7 @@ TRUNNER_BPF_PROGS_DIR := progs
TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \
			 network_helpers.c testing_helpers.c \
			 btf_helpers.c flow_dissector_load.h \
			 cap_helpers.c test_loader.c xsk.c
			 cap_helpers.c test_loader.c xsk.c disasm.c
TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \
		       $(OUTPUT)/liburandom_read.so \
		       $(OUTPUT)/xdp_synproxy \

@@ -0,0 +1,38 @@
#ifndef __BPF_KFUNCS__
#define __BPF_KFUNCS__

/* Description
 *  Initializes an skb-type dynptr
 * Returns
 *  Error code
 */
extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
			       struct bpf_dynptr *ptr__uninit) __ksym;

/* Description
 *  Initializes an xdp-type dynptr
 * Returns
 *  Error code
 */
extern int bpf_dynptr_from_xdp(struct xdp_md *xdp, __u64 flags,
			       struct bpf_dynptr *ptr__uninit) __ksym;

/* Description
 *  Obtain a read-only pointer to the dynptr's data
 * Returns
 *  Either a direct pointer to the dynptr data or a pointer to the user-provided
 *  buffer if unable to obtain a direct pointer
 */
extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, __u32 offset,
			      void *buffer, __u32 buffer__szk) __ksym;

/* Description
 *  Obtain a read-write pointer to the dynptr's data
 * Returns
 *  Either a direct pointer to the dynptr data or a pointer to the user-provided
 *  buffer if unable to obtain a direct pointer
 */
extern void *bpf_dynptr_slice_rdwr(const struct bpf_dynptr *ptr, __u32 offset,
				   void *buffer, __u32 buffer__szk) __ksym;

#endif

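The declarations above are enough to use skb dynptrs from a program. A minimal sketch, assuming a kernel that provides these kfuncs: bpf_dynptr_slice() may return a pointer either into the packet or into the supplied bounce buffer, so the result must be NULL-checked before use.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include "bpf_kfuncs.h"

SEC("tc")
int read_eth_header(struct __sk_buff *skb)
{
	struct bpf_dynptr ptr;
	__u8 buffer[14];	/* Ethernet-header-sized bounce buffer */
	void *hdr;

	if (bpf_dynptr_from_skb(skb, 0, &ptr))
		return 0;

	hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
	if (!hdr)
		return 0;

	/* hdr now points at 14 contiguous readable bytes of the frame */
	return 0;
}

char _license[] SEC("license") = "GPL";
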
@@ -0,0 +1 @@
../../../../kernel/bpf/disasm.c

@@ -0,0 +1 @@
../../../../kernel/bpf/disasm.h

@@ -660,16 +660,22 @@ static int do_test_single(struct bpf_align_test *test)
		 * func#0 @0
		 * 0: R1=ctx(off=0,imm=0) R10=fp0
		 * 0: (b7) r3 = 2 ; R3_w=2
		 *
		 * Sometimes it's actually two lines below, e.g. when
		 * searching for "6: R3_w=scalar(umax=255,var_off=(0x0; 0xff))":
		 * from 4 to 6: R0_w=pkt(off=8,r=8,imm=0) R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=8,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
		 * 6: R0_w=pkt(off=8,r=8,imm=0) R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=8,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
		 * 6: (71) r3 = *(u8 *)(r2 +0) ; R2_w=pkt(off=0,r=8,imm=0) R3_w=scalar(umax=255,var_off=(0x0; 0xff))
		 */
		if (!strstr(line_ptr, m.match)) {
		while (!strstr(line_ptr, m.match)) {
			cur_line = -1;
			line_ptr = strtok(NULL, "\n");
			sscanf(line_ptr, "%u: ", &cur_line);
			sscanf(line_ptr ?: "", "%u: ", &cur_line);
			if (!line_ptr || cur_line != m.line)
				break;
		}
		if (cur_line != m.line || !line_ptr ||
		    !strstr(line_ptr, m.match)) {
			printf("Failed to find match %u: %s\n",
			       m.line, m.match);
		if (cur_line != m.line || !line_ptr || !strstr(line_ptr, m.match)) {
			printf("Failed to find match %u: %s\n", m.line, m.match);
			ret = 1;
			printf("%s", bpf_vlog);
			break;

@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
#include "test_attach_kprobe_sleepable.skel.h"
#include "test_attach_probe_manual.skel.h"
#include "test_attach_probe.skel.h"

/* this is how USDT semaphore is actually defined, except volatile modifier */

@@ -23,81 +25,54 @@ static noinline void trigger_func3(void)
	asm volatile ("");
}

/* attach point for ref_ctr */
static noinline void trigger_func4(void)
{
	asm volatile ("");
}

static char test_data[] = "test_data";

void test_attach_probe(void)
/* manual attach kprobe/kretprobe/uprobe/uretprobe testings */
static void test_attach_probe_manual(enum probe_attach_mode attach_mode)
{
	DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
	DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, kprobe_opts);
	struct bpf_link *kprobe_link, *kretprobe_link;
	struct bpf_link *uprobe_link, *uretprobe_link;
	struct test_attach_probe* skel;
	ssize_t uprobe_offset, ref_ctr_offset;
	struct bpf_link *uprobe_err_link;
	FILE *devnull;
	bool legacy;
	struct test_attach_probe_manual *skel;
	ssize_t uprobe_offset;

	/* Check if new-style kprobe/uprobe API is supported.
	 * Kernels that support new FD-based kprobe and uprobe BPF attachment
	 * through perf_event_open() syscall expose
	 * /sys/bus/event_source/devices/kprobe/type and
	 * /sys/bus/event_source/devices/uprobe/type files, respectively. They
	 * contain magic numbers that are passed as "type" field of
	 * perf_event_attr. Lack of such file in the system indicates legacy
	 * kernel with old-style kprobe/uprobe attach interface through
	 * creating per-probe event through tracefs. For such cases
	 * ref_ctr_offset feature is not supported, so we don't test it.
	 */
	legacy = access("/sys/bus/event_source/devices/kprobe/type", F_OK) != 0;
	skel = test_attach_probe_manual__open_and_load();
	if (!ASSERT_OK_PTR(skel, "skel_kprobe_manual_open_and_load"))
		return;

	uprobe_offset = get_uprobe_offset(&trigger_func);
	if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset"))
		return;

	ref_ctr_offset = get_rel_offset((uintptr_t)&uprobe_ref_ctr);
	if (!ASSERT_GE(ref_ctr_offset, 0, "ref_ctr_offset"))
		return;

	skel = test_attach_probe__open();
	if (!ASSERT_OK_PTR(skel, "skel_open"))
		return;

	/* sleepable kprobe test case needs flags set before loading */
	if (!ASSERT_OK(bpf_program__set_flags(skel->progs.handle_kprobe_sleepable,
		BPF_F_SLEEPABLE), "kprobe_sleepable_flags"))
		goto cleanup;

	if (!ASSERT_OK(test_attach_probe__load(skel), "skel_load"))
		goto cleanup;
	if (!ASSERT_OK_PTR(skel->bss, "check_bss"))
		goto cleanup;

	/* manual-attach kprobe/kretprobe */
	kprobe_link = bpf_program__attach_kprobe(skel->progs.handle_kprobe,
						 false /* retprobe */,
						 SYS_NANOSLEEP_KPROBE_NAME);
	kprobe_opts.attach_mode = attach_mode;
	kprobe_opts.retprobe = false;
	kprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kprobe,
						      SYS_NANOSLEEP_KPROBE_NAME,
						      &kprobe_opts);
	if (!ASSERT_OK_PTR(kprobe_link, "attach_kprobe"))
		goto cleanup;
	skel->links.handle_kprobe = kprobe_link;

	kretprobe_link = bpf_program__attach_kprobe(skel->progs.handle_kretprobe,
						    true /* retprobe */,
						    SYS_NANOSLEEP_KPROBE_NAME);
	kprobe_opts.retprobe = true;
	kretprobe_link = bpf_program__attach_kprobe_opts(skel->progs.handle_kretprobe,
							 SYS_NANOSLEEP_KPROBE_NAME,
							 &kprobe_opts);
	if (!ASSERT_OK_PTR(kretprobe_link, "attach_kretprobe"))
		goto cleanup;
	skel->links.handle_kretprobe = kretprobe_link;

	/* auto-attachable kprobe and kretprobe */
	skel->links.handle_kprobe_auto = bpf_program__attach(skel->progs.handle_kprobe_auto);
	ASSERT_OK_PTR(skel->links.handle_kprobe_auto, "attach_kprobe_auto");

	skel->links.handle_kretprobe_auto = bpf_program__attach(skel->progs.handle_kretprobe_auto);
	ASSERT_OK_PTR(skel->links.handle_kretprobe_auto, "attach_kretprobe_auto");

	if (!legacy)
		ASSERT_EQ(uprobe_ref_ctr, 0, "uprobe_ref_ctr_before");

	/* manual-attach uprobe/uretprobe */
	uprobe_opts.attach_mode = attach_mode;
	uprobe_opts.ref_ctr_offset = 0;
	uprobe_opts.retprobe = false;
	uprobe_opts.ref_ctr_offset = legacy ? 0 : ref_ctr_offset;
	uprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uprobe,
						      0 /* self pid */,
						      "/proc/self/exe",

@@ -107,12 +82,7 @@ void test_attach_probe(void)
		goto cleanup;
	skel->links.handle_uprobe = uprobe_link;

	if (!legacy)
		ASSERT_GT(uprobe_ref_ctr, 0, "uprobe_ref_ctr_after");

	/* if uprobe uses ref_ctr, uretprobe has to use ref_ctr as well */
	uprobe_opts.retprobe = true;
	uprobe_opts.ref_ctr_offset = legacy ? 0 : ref_ctr_offset;
	uretprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uretprobe,
							 -1 /* any pid */,
							 "/proc/self/exe",

@@ -121,12 +91,7 @@ void test_attach_probe(void)
		goto cleanup;
	skel->links.handle_uretprobe = uretprobe_link;

	/* verify auto-attach fails for old-style uprobe definition */
	uprobe_err_link = bpf_program__attach(skel->progs.handle_uprobe_byname);
	if (!ASSERT_EQ(libbpf_get_error(uprobe_err_link), -EOPNOTSUPP,
		       "auto-attach should fail for old-style name"))
		goto cleanup;

	/* attach uprobe by function name manually */
	uprobe_opts.func_name = "trigger_func2";
	uprobe_opts.retprobe = false;
	uprobe_opts.ref_ctr_offset = 0;

@@ -138,11 +103,63 @@ void test_attach_probe(void)
	if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname, "attach_uprobe_byname"))
		goto cleanup;

	/* trigger & validate kprobe && kretprobe */
	usleep(1);

	/* trigger & validate uprobe & uretprobe */
	trigger_func();

	/* trigger & validate uprobe attached by name */
	trigger_func2();

	ASSERT_EQ(skel->bss->kprobe_res, 1, "check_kprobe_res");
	ASSERT_EQ(skel->bss->kretprobe_res, 2, "check_kretprobe_res");
	ASSERT_EQ(skel->bss->uprobe_res, 3, "check_uprobe_res");
	ASSERT_EQ(skel->bss->uretprobe_res, 4, "check_uretprobe_res");
	ASSERT_EQ(skel->bss->uprobe_byname_res, 5, "check_uprobe_byname_res");

cleanup:
	test_attach_probe_manual__destroy(skel);
}

static void test_attach_probe_auto(struct test_attach_probe *skel)
{
	struct bpf_link *uprobe_err_link;

	/* auto-attachable kprobe and kretprobe */
	skel->links.handle_kprobe_auto = bpf_program__attach(skel->progs.handle_kprobe_auto);
	ASSERT_OK_PTR(skel->links.handle_kprobe_auto, "attach_kprobe_auto");

	skel->links.handle_kretprobe_auto = bpf_program__attach(skel->progs.handle_kretprobe_auto);
	ASSERT_OK_PTR(skel->links.handle_kretprobe_auto, "attach_kretprobe_auto");

	/* verify auto-attach fails for old-style uprobe definition */
	uprobe_err_link = bpf_program__attach(skel->progs.handle_uprobe_byname);
	if (!ASSERT_EQ(libbpf_get_error(uprobe_err_link), -EOPNOTSUPP,
		       "auto-attach should fail for old-style name"))
		return;

	/* verify auto-attach works */
	skel->links.handle_uretprobe_byname =
		bpf_program__attach(skel->progs.handle_uretprobe_byname);
	if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname, "attach_uretprobe_byname"))
		goto cleanup;
		return;

	/* trigger & validate kprobe && kretprobe */
	usleep(1);

	/* trigger & validate uprobe attached by name */
	trigger_func2();

	ASSERT_EQ(skel->bss->kprobe2_res, 11, "check_kprobe_auto_res");
	ASSERT_EQ(skel->bss->kretprobe2_res, 22, "check_kretprobe_auto_res");
	ASSERT_EQ(skel->bss->uretprobe_byname_res, 6, "check_uretprobe_byname_res");
}

static void test_uprobe_lib(struct test_attach_probe *skel)
{
	DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
	FILE *devnull;

	/* test attach by name for a library function, using the library
	 * as the binary argument. libc.so.6 will be resolved via dlopen()/dlinfo().

@@ -155,7 +172,7 @@ void test_attach_probe(void)
							    "libc.so.6",
							    0, &uprobe_opts);
	if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname2, "attach_uprobe_byname2"))
		goto cleanup;
		return;

	uprobe_opts.func_name = "fclose";
	uprobe_opts.retprobe = true;

@@ -165,62 +182,144 @@ void test_attach_probe(void)
							      "libc.so.6",
							      0, &uprobe_opts);
	if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname2, "attach_uretprobe_byname2"))
		goto cleanup;

	/* sleepable kprobes should not attach successfully */
	skel->links.handle_kprobe_sleepable = bpf_program__attach(skel->progs.handle_kprobe_sleepable);
	if (!ASSERT_ERR_PTR(skel->links.handle_kprobe_sleepable, "attach_kprobe_sleepable"))
		goto cleanup;

	/* test sleepable uprobe and uretprobe variants */
	skel->links.handle_uprobe_byname3_sleepable = bpf_program__attach(skel->progs.handle_uprobe_byname3_sleepable);
	if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname3_sleepable, "attach_uprobe_byname3_sleepable"))
		goto cleanup;

	skel->links.handle_uprobe_byname3 = bpf_program__attach(skel->progs.handle_uprobe_byname3);
	if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname3, "attach_uprobe_byname3"))
		goto cleanup;

	skel->links.handle_uretprobe_byname3_sleepable = bpf_program__attach(skel->progs.handle_uretprobe_byname3_sleepable);
	if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname3_sleepable, "attach_uretprobe_byname3_sleepable"))
		goto cleanup;

	skel->links.handle_uretprobe_byname3 = bpf_program__attach(skel->progs.handle_uretprobe_byname3);
	if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname3, "attach_uretprobe_byname3"))
		goto cleanup;

	skel->bss->user_ptr = test_data;

	/* trigger & validate kprobe && kretprobe */
	usleep(1);
		return;

	/* trigger & validate shared library u[ret]probes attached by name */
	devnull = fopen("/dev/null", "r");
	fclose(devnull);

	/* trigger & validate uprobe & uretprobe */
	trigger_func();
	ASSERT_EQ(skel->bss->uprobe_byname2_res, 7, "check_uprobe_byname2_res");
	ASSERT_EQ(skel->bss->uretprobe_byname2_res, 8, "check_uretprobe_byname2_res");
}

	/* trigger & validate uprobe attached by name */
	trigger_func2();
static void test_uprobe_ref_ctr(struct test_attach_probe *skel)
{
	DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
	struct bpf_link *uprobe_link, *uretprobe_link;
	ssize_t uprobe_offset, ref_ctr_offset;

	uprobe_offset = get_uprobe_offset(&trigger_func4);
	if (!ASSERT_GE(uprobe_offset, 0, "uprobe_offset_ref_ctr"))
		return;

	ref_ctr_offset = get_rel_offset((uintptr_t)&uprobe_ref_ctr);
	if (!ASSERT_GE(ref_ctr_offset, 0, "ref_ctr_offset"))
		return;

	ASSERT_EQ(uprobe_ref_ctr, 0, "uprobe_ref_ctr_before");

	uprobe_opts.retprobe = false;
	uprobe_opts.ref_ctr_offset = ref_ctr_offset;
	uprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uprobe_ref_ctr,
						      0 /* self pid */,
						      "/proc/self/exe",
						      uprobe_offset,
						      &uprobe_opts);
	if (!ASSERT_OK_PTR(uprobe_link, "attach_uprobe_ref_ctr"))
		return;
	skel->links.handle_uprobe_ref_ctr = uprobe_link;

	ASSERT_GT(uprobe_ref_ctr, 0, "uprobe_ref_ctr_after");

	/* if uprobe uses ref_ctr, uretprobe has to use ref_ctr as well */
	uprobe_opts.retprobe = true;
	uprobe_opts.ref_ctr_offset = ref_ctr_offset;
	uretprobe_link = bpf_program__attach_uprobe_opts(skel->progs.handle_uretprobe_ref_ctr,
							 -1 /* any pid */,
							 "/proc/self/exe",
							 uprobe_offset, &uprobe_opts);
	if (!ASSERT_OK_PTR(uretprobe_link, "attach_uretprobe_ref_ctr"))
		return;
	skel->links.handle_uretprobe_ref_ctr = uretprobe_link;
}

static void test_kprobe_sleepable(void)
{
	struct test_attach_kprobe_sleepable *skel;

	skel = test_attach_kprobe_sleepable__open();
	if (!ASSERT_OK_PTR(skel, "skel_kprobe_sleepable_open"))
		return;

	/* sleepable kprobe test case needs flags set before loading */
	if (!ASSERT_OK(bpf_program__set_flags(skel->progs.handle_kprobe_sleepable,
		BPF_F_SLEEPABLE), "kprobe_sleepable_flags"))
		goto cleanup;

	if (!ASSERT_OK(test_attach_kprobe_sleepable__load(skel),
		       "skel_kprobe_sleepable_load"))
		goto cleanup;

	/* sleepable kprobes should not attach successfully */
	skel->links.handle_kprobe_sleepable = bpf_program__attach(skel->progs.handle_kprobe_sleepable);
	ASSERT_ERR_PTR(skel->links.handle_kprobe_sleepable, "attach_kprobe_sleepable");

cleanup:
	test_attach_kprobe_sleepable__destroy(skel);
}

static void test_uprobe_sleepable(struct test_attach_probe *skel)
{
	/* test sleepable uprobe and uretprobe variants */
	skel->links.handle_uprobe_byname3_sleepable = bpf_program__attach(skel->progs.handle_uprobe_byname3_sleepable);
	if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname3_sleepable, "attach_uprobe_byname3_sleepable"))
		return;

	skel->links.handle_uprobe_byname3 = bpf_program__attach(skel->progs.handle_uprobe_byname3);
	if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname3, "attach_uprobe_byname3"))
		return;

	skel->links.handle_uretprobe_byname3_sleepable = bpf_program__attach(skel->progs.handle_uretprobe_byname3_sleepable);
	if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname3_sleepable, "attach_uretprobe_byname3_sleepable"))
		return;

	skel->links.handle_uretprobe_byname3 = bpf_program__attach(skel->progs.handle_uretprobe_byname3);
	if (!ASSERT_OK_PTR(skel->links.handle_uretprobe_byname3, "attach_uretprobe_byname3"))
		return;

	skel->bss->user_ptr = test_data;

	/* trigger & validate sleepable uprobe attached by name */
	trigger_func3();

	ASSERT_EQ(skel->bss->kprobe_res, 1, "check_kprobe_res");
	ASSERT_EQ(skel->bss->kprobe2_res, 11, "check_kprobe_auto_res");
	ASSERT_EQ(skel->bss->kretprobe_res, 2, "check_kretprobe_res");
	ASSERT_EQ(skel->bss->kretprobe2_res, 22, "check_kretprobe_auto_res");
	ASSERT_EQ(skel->bss->uprobe_res, 3, "check_uprobe_res");
	ASSERT_EQ(skel->bss->uretprobe_res, 4, "check_uretprobe_res");
	ASSERT_EQ(skel->bss->uprobe_byname_res, 5, "check_uprobe_byname_res");
	ASSERT_EQ(skel->bss->uretprobe_byname_res, 6, "check_uretprobe_byname_res");
	ASSERT_EQ(skel->bss->uprobe_byname2_res, 7, "check_uprobe_byname2_res");
	ASSERT_EQ(skel->bss->uretprobe_byname2_res, 8, "check_uretprobe_byname2_res");
	ASSERT_EQ(skel->bss->uprobe_byname3_sleepable_res, 9, "check_uprobe_byname3_sleepable_res");
	ASSERT_EQ(skel->bss->uprobe_byname3_res, 10, "check_uprobe_byname3_res");
	ASSERT_EQ(skel->bss->uretprobe_byname3_sleepable_res, 11, "check_uretprobe_byname3_sleepable_res");
	ASSERT_EQ(skel->bss->uretprobe_byname3_res, 12, "check_uretprobe_byname3_res");
}

void test_attach_probe(void)
{
	struct test_attach_probe *skel;

	skel = test_attach_probe__open();
	if (!ASSERT_OK_PTR(skel, "skel_open"))
		return;

	if (!ASSERT_OK(test_attach_probe__load(skel), "skel_load"))
		goto cleanup;
	if (!ASSERT_OK_PTR(skel->bss, "check_bss"))
		goto cleanup;

	if (test__start_subtest("manual-default"))
		test_attach_probe_manual(PROBE_ATTACH_MODE_DEFAULT);
	if (test__start_subtest("manual-legacy"))
		test_attach_probe_manual(PROBE_ATTACH_MODE_LEGACY);
	if (test__start_subtest("manual-perf"))
		test_attach_probe_manual(PROBE_ATTACH_MODE_PERF);
	if (test__start_subtest("manual-link"))
		test_attach_probe_manual(PROBE_ATTACH_MODE_LINK);

	if (test__start_subtest("auto"))
		test_attach_probe_auto(skel);
	if (test__start_subtest("kprobe-sleepable"))
		test_kprobe_sleepable();
	if (test__start_subtest("uprobe-lib"))
		test_uprobe_lib(skel);
	if (test__start_subtest("uprobe-sleepable"))
		test_uprobe_sleepable(skel);
	if (test__start_subtest("uprobe-ref_ctr"))
		test_uprobe_ref_ctr(skel);

cleanup:
	test_attach_probe__destroy(skel);

@@ -84,6 +84,7 @@ static const char * const success_tests[] = {
	"test_cgrp_xchg_release",
	"test_cgrp_get_release",
	"test_cgrp_get_ancestors",
	"test_cgrp_from_id",
};

void test_cgrp_kfunc(void)

@@ -193,7 +193,7 @@ out:
	cgrp_ls_sleepable__destroy(skel);
}

static void test_no_rcu_lock(__u64 cgroup_id)
static void test_yes_rcu_lock(__u64 cgroup_id)
{
	struct cgrp_ls_sleepable *skel;
	int err;

@@ -204,7 +204,7 @@ static void test_no_rcu_lock(__u64 cgroup_id)

	skel->bss->target_pid = syscall(SYS_gettid);

	bpf_program__set_autoload(skel->progs.no_rcu_lock, true);
	bpf_program__set_autoload(skel->progs.yes_rcu_lock, true);
	err = cgrp_ls_sleepable__load(skel);
	if (!ASSERT_OK(err, "skel_load"))
		goto out;

@@ -220,7 +220,7 @@ out:
	cgrp_ls_sleepable__destroy(skel);
}

static void test_rcu_lock(void)
static void test_no_rcu_lock(void)
{
	struct cgrp_ls_sleepable *skel;
	int err;

@@ -229,7 +229,7 @@ static void test_rcu_lock(void)
	if (!ASSERT_OK_PTR(skel, "skel_open"))
		return;

	bpf_program__set_autoload(skel->progs.yes_rcu_lock, true);
	bpf_program__set_autoload(skel->progs.no_rcu_lock, true);
	err = cgrp_ls_sleepable__load(skel);
	ASSERT_ERR(err, "skel_load");

@@ -256,10 +256,10 @@ void test_cgrp_local_storage(void)
		test_negative();
	if (test__start_subtest("cgroup_iter_sleepable"))
		test_cgroup_iter_sleepable(cgroup_fd, cgroup_id);
	if (test__start_subtest("yes_rcu_lock"))
		test_yes_rcu_lock(cgroup_id);
	if (test__start_subtest("no_rcu_lock"))
		test_no_rcu_lock(cgroup_id);
	if (test__start_subtest("rcu_lock"))
		test_rcu_lock();
		test_no_rcu_lock();

	close(cgroup_fd);
}

@@ -13,6 +13,7 @@

#include "progs/test_cls_redirect.h"
#include "test_cls_redirect.skel.h"
#include "test_cls_redirect_dynptr.skel.h"
#include "test_cls_redirect_subprogs.skel.h"

#define ENCAP_IP INADDR_LOOPBACK

@@ -446,6 +447,28 @@ cleanup:
	close_fds((int *)conns, sizeof(conns) / sizeof(conns[0][0]));
}

static void test_cls_redirect_dynptr(void)
{
	struct test_cls_redirect_dynptr *skel;
	int err;

	skel = test_cls_redirect_dynptr__open();
	if (!ASSERT_OK_PTR(skel, "skel_open"))
		return;

	skel->rodata->ENCAPSULATION_IP = htonl(ENCAP_IP);
	skel->rodata->ENCAPSULATION_PORT = htons(ENCAP_PORT);

	err = test_cls_redirect_dynptr__load(skel);
	if (!ASSERT_OK(err, "skel_load"))
		goto cleanup;

	test_cls_redirect_common(skel->progs.cls_redirect);

cleanup:
	test_cls_redirect_dynptr__destroy(skel);
}

static void test_cls_redirect_inlined(void)
{
	struct test_cls_redirect *skel;

@@ -496,4 +519,6 @@ void test_cls_redirect(void)
		test_cls_redirect_inlined();
	if (test__start_subtest("cls_redirect_subprogs"))
		test_cls_redirect_subprogs();
	if (test__start_subtest("cls_redirect_dynptr"))
		test_cls_redirect_dynptr();
}

@@ -0,0 +1,917 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <limits.h>
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+#include <regex.h>
+#include <test_progs.h>
+
+#include "bpf/btf.h"
+#include "bpf_util.h"
+#include "linux/filter.h"
+#include "disasm.h"
+
+#define MAX_PROG_TEXT_SZ (32 * 1024)
+
+/* The code in this file serves the sole purpose of executing test cases
+ * specified in the test_cases array. Each test case specifies a program
+ * type, context field offset, and disassembly patterns that correspond
+ * to read and write instructions generated by
+ * verifier.c:convert_ctx_access() for accessing that field.
+ *
+ * For each test case, up to three programs are created:
+ * - One that uses BPF_LDX_MEM to read the context field.
+ * - One that uses BPF_STX_MEM to write to the context field.
+ * - One that uses BPF_ST_MEM to write to the context field.
+ *
+ * The disassembly of each program is then compared with the pattern
+ * specified in the test case.
+ */
+struct test_case {
+	char *name;
+	enum bpf_prog_type prog_type;
+	enum bpf_attach_type expected_attach_type;
+	int field_offset;
+	int field_sz;
+	/* Program generated for BPF_ST_MEM uses value 42 by default,
+	 * this field allows specifying a custom value.
+	 */
+	struct {
+		bool use;
+		int value;
+	} st_value;
+	/* Pattern for BPF_LDX_MEM(field_sz, dst, ctx, field_offset) */
+	char *read;
+	/* Pattern for BPF_STX_MEM(field_sz, ctx, src, field_offset) and
+	 * BPF_ST_MEM (field_sz, ctx, src, field_offset)
+	 */
+	char *write;
+	/* Pattern for BPF_ST_MEM(field_sz, ctx, src, field_offset),
+	 * takes priority over `write`.
+	 */
+	char *write_st;
+	/* Pattern for BPF_STX_MEM (field_sz, ctx, src, field_offset),
+	 * takes priority over `write`.
+	 */
+	char *write_stx;
+};
+
+#define N(_prog_type, type, field, name_extra...)	\
+	.name = #_prog_type "." #field name_extra,	\
+	.prog_type = BPF_PROG_TYPE_##_prog_type,	\
+	.field_offset = offsetof(type, field),		\
+	.field_sz = sizeof(typeof(((type *)NULL)->field))
+
+static struct test_case test_cases[] = {
+/* Sign extension on s390 changes the pattern */
+#if defined(__x86_64__) || defined(__aarch64__)
+	{
+		N(SCHED_CLS, struct __sk_buff, tstamp),
+		.read = "r11 = *(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset);"
+			"w11 &= 160;"
+			"if w11 != 0xa0 goto pc+2;"
+			"$dst = 0;"
+			"goto pc+1;"
+			"$dst = *(u64 *)($ctx + sk_buff::tstamp);",
+		.write = "r11 = *(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset);"
+			 "if w11 & 0x80 goto pc+1;"
+			 "goto pc+2;"
+			 "w11 &= -33;"
+			 "*(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset) = r11;"
+			 "*(u64 *)($ctx + sk_buff::tstamp) = $src;",
+	},
+#endif
+	{
+		N(SCHED_CLS, struct __sk_buff, priority),
+		.read = "$dst = *(u32 *)($ctx + sk_buff::priority);",
+		.write = "*(u32 *)($ctx + sk_buff::priority) = $src;",
+	},
+	{
+		N(SCHED_CLS, struct __sk_buff, mark),
+		.read = "$dst = *(u32 *)($ctx + sk_buff::mark);",
+		.write = "*(u32 *)($ctx + sk_buff::mark) = $src;",
+	},
+	{
+		N(SCHED_CLS, struct __sk_buff, cb[0]),
+		.read = "$dst = *(u32 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::data));",
+		.write = "*(u32 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::data)) = $src;",
+	},
+	{
+		N(SCHED_CLS, struct __sk_buff, tc_classid),
+		.read = "$dst = *(u16 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::tc_classid));",
+		.write = "*(u16 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::tc_classid)) = $src;",
+	},
+	{
+		N(SCHED_CLS, struct __sk_buff, tc_index),
+		.read = "$dst = *(u16 *)($ctx + sk_buff::tc_index);",
+		.write = "*(u16 *)($ctx + sk_buff::tc_index) = $src;",
+	},
+	{
+		N(SCHED_CLS, struct __sk_buff, queue_mapping),
+		.read = "$dst = *(u16 *)($ctx + sk_buff::queue_mapping);",
+		.write_stx = "if $src >= 0xffff goto pc+1;"
+			     "*(u16 *)($ctx + sk_buff::queue_mapping) = $src;",
+		.write_st = "*(u16 *)($ctx + sk_buff::queue_mapping) = $src;",
+	},
+	{
+		/* This is a corner case in filter.c:bpf_convert_ctx_access() */
+		N(SCHED_CLS, struct __sk_buff, queue_mapping, ".ushrt_max"),
+		.st_value = { true, USHRT_MAX },
+		.write_st = "goto pc+0;",
+	},
+	{
+		N(CGROUP_SOCK, struct bpf_sock, bound_dev_if),
+		.read = "$dst = *(u32 *)($ctx + sock_common::skc_bound_dev_if);",
+		.write = "*(u32 *)($ctx + sock_common::skc_bound_dev_if) = $src;",
+	},
+	{
+		N(CGROUP_SOCK, struct bpf_sock, mark),
+		.read = "$dst = *(u32 *)($ctx + sock::sk_mark);",
+		.write = "*(u32 *)($ctx + sock::sk_mark) = $src;",
+	},
+	{
+		N(CGROUP_SOCK, struct bpf_sock, priority),
+		.read = "$dst = *(u32 *)($ctx + sock::sk_priority);",
+		.write = "*(u32 *)($ctx + sock::sk_priority) = $src;",
+	},
+	{
+		N(SOCK_OPS, struct bpf_sock_ops, replylong[0]),
+		.read = "$dst = *(u32 *)($ctx + bpf_sock_ops_kern::replylong);",
+		.write = "*(u32 *)($ctx + bpf_sock_ops_kern::replylong) = $src;",
+	},
+	{
+		N(CGROUP_SYSCTL, struct bpf_sysctl, file_pos),
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+		.read = "$dst = *(u64 *)($ctx + bpf_sysctl_kern::ppos);"
+			"$dst = *(u32 *)($dst +0);",
+		.write = "*(u64 *)($ctx + bpf_sysctl_kern::tmp_reg) = r9;"
+			 "r9 = *(u64 *)($ctx + bpf_sysctl_kern::ppos);"
+			 "*(u32 *)(r9 +0) = $src;"
+			 "r9 = *(u64 *)($ctx + bpf_sysctl_kern::tmp_reg);",
+#else
+		.read = "$dst = *(u64 *)($ctx + bpf_sysctl_kern::ppos);"
+			"$dst = *(u32 *)($dst +4);",
+		.write = "*(u64 *)($ctx + bpf_sysctl_kern::tmp_reg) = r9;"
+			 "r9 = *(u64 *)($ctx + bpf_sysctl_kern::ppos);"
+			 "*(u32 *)(r9 +4) = $src;"
+			 "r9 = *(u64 *)($ctx + bpf_sysctl_kern::tmp_reg);",
+#endif
+	},
+	{
+		N(CGROUP_SOCKOPT, struct bpf_sockopt, sk),
+		.read = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::sk);",
+		.expected_attach_type = BPF_CGROUP_GETSOCKOPT,
+	},
+	{
+		N(CGROUP_SOCKOPT, struct bpf_sockopt, level),
+		.read = "$dst = *(u32 *)($ctx + bpf_sockopt_kern::level);",
+		.write = "*(u32 *)($ctx + bpf_sockopt_kern::level) = $src;",
+		.expected_attach_type = BPF_CGROUP_SETSOCKOPT,
+	},
+	{
+		N(CGROUP_SOCKOPT, struct bpf_sockopt, optname),
+		.read = "$dst = *(u32 *)($ctx + bpf_sockopt_kern::optname);",
+		.write = "*(u32 *)($ctx + bpf_sockopt_kern::optname) = $src;",
+		.expected_attach_type = BPF_CGROUP_SETSOCKOPT,
+	},
+	{
+		N(CGROUP_SOCKOPT, struct bpf_sockopt, optlen),
+		.read = "$dst = *(u32 *)($ctx + bpf_sockopt_kern::optlen);",
+		.write = "*(u32 *)($ctx + bpf_sockopt_kern::optlen) = $src;",
+		.expected_attach_type = BPF_CGROUP_SETSOCKOPT,
+	},
+	{
+		N(CGROUP_SOCKOPT, struct bpf_sockopt, retval),
+		.read = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::current_task);"
+			"$dst = *(u64 *)($dst + task_struct::bpf_ctx);"
+			"$dst = *(u32 *)($dst + bpf_cg_run_ctx::retval);",
+		.write = "*(u64 *)($ctx + bpf_sockopt_kern::tmp_reg) = r9;"
+			 "r9 = *(u64 *)($ctx + bpf_sockopt_kern::current_task);"
+			 "r9 = *(u64 *)(r9 + task_struct::bpf_ctx);"
+			 "*(u32 *)(r9 + bpf_cg_run_ctx::retval) = $src;"
+			 "r9 = *(u64 *)($ctx + bpf_sockopt_kern::tmp_reg);",
+		.expected_attach_type = BPF_CGROUP_GETSOCKOPT,
+	},
+	{
+		N(CGROUP_SOCKOPT, struct bpf_sockopt, optval),
+		.read = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::optval);",
+		.expected_attach_type = BPF_CGROUP_GETSOCKOPT,
+	},
+	{
+		N(CGROUP_SOCKOPT, struct bpf_sockopt, optval_end),
+		.read = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::optval_end);",
+		.expected_attach_type = BPF_CGROUP_GETSOCKOPT,
+	},
+};
+
+#undef N
+
+static regex_t *ident_regex;
+static regex_t *field_regex;
+
+static char *skip_space(char *str)
+{
+	while (*str && isspace(*str))
+		++str;
+	return str;
+}
+
+static char *skip_space_and_semi(char *str)
+{
+	while (*str && (isspace(*str) || *str == ';'))
+		++str;
+	return str;
+}
+
+static char *match_str(char *str, char *prefix)
+{
+	while (*str && *prefix && *str == *prefix) {
+		++str;
+		++prefix;
+	}
+	if (*prefix)
+		return NULL;
+	return str;
+}
+
+static char *match_number(char *str, int num)
+{
+	char *next;
+	int snum = strtol(str, &next, 10);
+
+	if (next - str == 0 || num != snum)
+		return NULL;
+
+	return next;
+}
+
+static int find_field_offset_aux(struct btf *btf, int btf_id, char *field_name, int off)
+{
+	const struct btf_type *type = btf__type_by_id(btf, btf_id);
+	const struct btf_member *m;
+	__u16 mnum;
+	int i;
+
+	if (!type) {
+		PRINT_FAIL("Can't find btf_type for id %d\n", btf_id);
+		return -1;
+	}
+
+	if (!btf_is_struct(type) && !btf_is_union(type)) {
+		PRINT_FAIL("BTF id %d is not struct or union\n", btf_id);
+		return -1;
+	}
+
+	m = btf_members(type);
+	mnum = btf_vlen(type);
+
+	for (i = 0; i < mnum; ++i, ++m) {
+		const char *mname = btf__name_by_offset(btf, m->name_off);
+
+		if (strcmp(mname, "") == 0) {
+			int msize = find_field_offset_aux(btf, m->type, field_name,
+							  off + m->offset);
+			if (msize >= 0)
+				return msize;
+		}
+
+		if (strcmp(mname, field_name))
+			continue;
+
+		return (off + m->offset) / 8;
+	}
+
+	return -1;
+}
+
+static int find_field_offset(struct btf *btf, char *pattern, regmatch_t *matches)
+{
+	int type_sz = matches[1].rm_eo - matches[1].rm_so;
+	int field_sz = matches[2].rm_eo - matches[2].rm_so;
+	char *type = pattern + matches[1].rm_so;
+	char *field = pattern + matches[2].rm_so;
+	char field_str[128] = {};
+	char type_str[128] = {};
+	int btf_id, field_offset;
+
+	if (type_sz >= sizeof(type_str)) {
+		PRINT_FAIL("Malformed pattern: type ident is too long: %d\n", type_sz);
+		return -1;
+	}
+
+	if (field_sz >= sizeof(field_str)) {
+		PRINT_FAIL("Malformed pattern: field ident is too long: %d\n", field_sz);
+		return -1;
+	}
+
+	strncpy(type_str, type, type_sz);
+	strncpy(field_str, field, field_sz);
+	btf_id = btf__find_by_name(btf, type_str);
+	if (btf_id < 0) {
+		PRINT_FAIL("No BTF info for type %s\n", type_str);
+		return -1;
+	}
+
+	field_offset = find_field_offset_aux(btf, btf_id, field_str, 0);
+	if (field_offset < 0) {
+		PRINT_FAIL("No BTF info for field %s::%s\n", type_str, field_str);
+		return -1;
+	}
+
+	return field_offset;
+}
+
+static regex_t *compile_regex(char *pat)
+{
+	regex_t *re;
+	int err;
+
+	re = malloc(sizeof(regex_t));
+	if (!re) {
+		PRINT_FAIL("Can't alloc regex\n");
+		return NULL;
+	}
+
+	err = regcomp(re, pat, REG_EXTENDED);
+	if (err) {
+		char errbuf[512];
+
+		regerror(err, re, errbuf, sizeof(errbuf));
+		PRINT_FAIL("Can't compile regex: %s\n", errbuf);
+		free(re);
+		return NULL;
+	}
+
+	return re;
+}
+
+static void free_regex(regex_t *re)
+{
+	if (!re)
+		return;
+
+	regfree(re);
+	free(re);
+}
+
+static u32 max_line_len(char *str)
+{
+	u32 max_line = 0;
+	char *next = str;
+
+	while (next) {
+		next = strchr(str, '\n');
+		if (next) {
+			max_line = max_t(u32, max_line, (next - str));
+			str = next + 1;
+		} else {
+			max_line = max_t(u32, max_line, strlen(str));
+		}
+	}
+
+	return min(max_line, 60u);
+}
+
+/* Print strings `pattern_origin` and `text_origin` side by side,
+ * assume `pattern_pos` and `text_pos` designate location within
+ * corresponding origin string where match diverges.
+ * The output should look like:
+ *
+ *   Can't match disassembly(left) with pattern(right):
+ *   r2 = *(u64 *)(r1 +0)   ; $dst = *(u64 *)($ctx + bpf_sockopt_kern::sk1)
+ *                          ^                                            ^
+ *   r0 = 0                 ;
+ *   exit                   ;
+ */
+static void print_match_error(FILE *out,
+			      char *pattern_origin, char *text_origin,
+			      char *pattern_pos, char *text_pos)
+{
+	char *pattern = pattern_origin;
+	char *text = text_origin;
+	int middle = max_line_len(text) + 2;
+
+	fprintf(out, "Can't match disassembly(left) with pattern(right):\n");
+	while (*pattern || *text) {
+		int column = 0;
+		int mark1 = -1;
+		int mark2 = -1;
+
+		/* Print one line from text */
+		while (*text && *text != '\n') {
+			if (text == text_pos)
+				mark1 = column;
+			fputc(*text, out);
+			++text;
+			++column;
+		}
+		if (text == text_pos)
+			mark1 = column;
+
+		/* Pad to the middle */
+		while (column < middle) {
+			fputc(' ', out);
+			++column;
+		}
+		fputs("; ", out);
+		column += 3;
+
+		/* Print one line from pattern, pattern lines are terminated by ';' */
+		while (*pattern && *pattern != ';') {
+			if (pattern == pattern_pos)
+				mark2 = column;
+			fputc(*pattern, out);
+			++pattern;
+			++column;
+		}
+		if (pattern == pattern_pos)
+			mark2 = column;
+
+		fputc('\n', out);
+		if (*pattern)
+			++pattern;
+		if (*text)
+			++text;
+
+		/* If pattern and text diverge at this line, print an
+		 * additional line with '^' marks, highlighting
+		 * positions where match fails.
+		 */
+		if (mark1 > 0 || mark2 > 0) {
+			for (column = 0; column <= max(mark1, mark2); ++column) {
+				if (column == mark1 || column == mark2)
+					fputc('^', out);
+				else
+					fputc(' ', out);
+			}
+			fputc('\n', out);
+		}
+	}
+}
+
+/* Test if `text` matches `pattern`. Pattern consists of the following elements:
+ *
+ * - Field offset references:
+ *
+ *       <type>::<field>
+ *
+ *   When such a reference is encountered, BTF is used to compute the numerical
+ *   value for the offset of <field> in <type>. The `text` is expected to
+ *   contain the matching numerical value.
+ *
+ * - Field groups:
+ *
+ *       $(<type>::<field> [+ <type>::<field>]*)
+ *
+ *   Allows specifying an offset that is a sum of multiple field offsets.
+ *   The `text` is expected to contain the matching numerical value.
+ *
+ * - Variable references, e.g. `$src`, `$dst`, `$ctx`.
+ *   These are substitutions specified in the `reg_map` array.
+ *   If a substring of pattern is equal to `reg_map[i][0]` the `text` is
+ *   expected to contain `reg_map[i][1]` in the matching position.
+ *
+ * - Whitespace is ignored, ';' counts as whitespace for `pattern`.
+ *
+ * - Any other characters, `pattern` and `text` should match one-to-one.
+ *
+ * Example of a pattern:
+ *
+ *                    __________ fields group ________________
+ *                   '                                        '
+ *   *(u16 *)($ctx + $(sk_buff::cb + qdisc_skb_cb::tc_classid)) = $src;
+ *            ^^^^   '______________________'
+ *   variable reference       field offset reference
+ */
+static bool match_pattern(struct btf *btf, char *pattern, char *text, char *reg_map[][2])
+{
+	char *pattern_origin = pattern;
+	char *text_origin = text;
+	regmatch_t matches[3];
+
+_continue:
+	while (*pattern) {
+		if (!*text)
+			goto err;
+
+		/* Skip whitespace */
+		if (isspace(*pattern) || *pattern == ';') {
+			if (!isspace(*text) && text != text_origin && isalnum(text[-1]))
+				goto err;
+			pattern = skip_space_and_semi(pattern);
+			text = skip_space(text);
+			continue;
+		}
+
+		/* Check for variable references */
+		for (int i = 0; reg_map[i][0]; ++i) {
+			char *pattern_next, *text_next;
+
+			pattern_next = match_str(pattern, reg_map[i][0]);
+			if (!pattern_next)
+				continue;
+
+			text_next = match_str(text, reg_map[i][1]);
+			if (!text_next)
+				goto err;
+
+			pattern = pattern_next;
+			text = text_next;
+			goto _continue;
+		}
+
+		/* Match field group:
+		 *   $(sk_buff::cb + qdisc_skb_cb::tc_classid)
+		 */
+		if (strncmp(pattern, "$(", 2) == 0) {
+			char *group_start = pattern, *text_next;
+			int acc_offset = 0;
+
+			pattern += 2;
+
+			for (;;) {
+				int field_offset;
+
+				pattern = skip_space(pattern);
+				if (!*pattern) {
+					PRINT_FAIL("Unexpected end of pattern\n");
+					goto err;
+				}
+
+				if (*pattern == ')') {
+					++pattern;
+					break;
+				}
+
+				if (*pattern == '+') {
+					++pattern;
+					continue;
+				}
+
+				printf("pattern: %s\n", pattern);
+				if (regexec(field_regex, pattern, 3, matches, 0) != 0) {
+					PRINT_FAIL("Field reference expected\n");
+					goto err;
+				}
+
+				field_offset = find_field_offset(btf, pattern, matches);
+				if (field_offset < 0)
+					goto err;
+
+				pattern += matches[0].rm_eo;
+				acc_offset += field_offset;
+			}
+
+			text_next = match_number(text, acc_offset);
+			if (!text_next) {
+				PRINT_FAIL("No match for group offset %.*s (%d)\n",
+					   (int)(pattern - group_start),
+					   group_start,
+					   acc_offset);
+				goto err;
+			}
+			text = text_next;
+		}
+
+		/* Match field reference:
+		 *   sk_buff::cb
+		 */
+		if (regexec(field_regex, pattern, 3, matches, 0) == 0) {
+			int field_offset;
+			char *text_next;
+
+			field_offset = find_field_offset(btf, pattern, matches);
+			if (field_offset < 0)
+				goto err;
+
+			text_next = match_number(text, field_offset);
+			if (!text_next) {
+				PRINT_FAIL("No match for field offset %.*s (%d)\n",
+					   (int)matches[0].rm_eo, pattern, field_offset);
+				goto err;
+			}
+
+			pattern += matches[0].rm_eo;
+			text = text_next;
+			continue;
+		}
+
+		/* If pattern points to an identifier not followed by '::'
+		 * skip the identifier to avoid n^2 application of the
+		 * field reference rule.
+		 */
+		if (regexec(ident_regex, pattern, 1, matches, 0) == 0) {
+			if (strncmp(pattern, text, matches[0].rm_eo) != 0)
+				goto err;
+
+			pattern += matches[0].rm_eo;
+			text += matches[0].rm_eo;
+			continue;
+		}
+
+		/* Match literally */
+		if (*pattern != *text)
+			goto err;
+
+		++pattern;
+		++text;
+	}
+
+	return true;
+
+err:
+	test__fail();
+	print_match_error(stdout, pattern_origin, text_origin, pattern, text);
+	return false;
+}
+
+/* Request BPF program instructions after all rewrites are applied,
+ * e.g. verifier.c:convert_ctx_access() is done.
+ */
+static int get_xlated_program(int fd_prog, struct bpf_insn **buf, __u32 *cnt)
+{
+	struct bpf_prog_info info = {};
+	__u32 info_len = sizeof(info);
+	__u32 xlated_prog_len;
+	__u32 buf_element_size = sizeof(struct bpf_insn);
+
+	if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
+		perror("bpf_prog_get_info_by_fd failed");
+		return -1;
+	}
+
+	xlated_prog_len = info.xlated_prog_len;
+	if (xlated_prog_len % buf_element_size) {
+		printf("Program length %d is not multiple of %d\n",
+		       xlated_prog_len, buf_element_size);
+		return -1;
+	}
+
+	*cnt = xlated_prog_len / buf_element_size;
+	*buf = calloc(*cnt, buf_element_size);
+	if (!*buf) {
+		perror("can't allocate xlated program buffer");
+		return -ENOMEM;
+	}
+
+	bzero(&info, sizeof(info));
+	info.xlated_prog_len = xlated_prog_len;
+	info.xlated_prog_insns = (__u64)(unsigned long)*buf;
+	if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
+		perror("second bpf_prog_get_info_by_fd failed");
+		goto out_free_buf;
+	}
+
+	return 0;
+
+out_free_buf:
+	free(*buf);
+	return -1;
+}
+
+static void print_insn(void *private_data, const char *fmt, ...)
+{
+	va_list args;
+
+	va_start(args, fmt);
+	vfprintf((FILE *)private_data, fmt, args);
+	va_end(args);
+}
+
+/* Disassemble instructions to a stream */
+static void print_xlated(FILE *out, struct bpf_insn *insn, __u32 len)
+{
+	const struct bpf_insn_cbs cbs = {
+		.cb_print = print_insn,
+		.cb_call = NULL,
+		.cb_imm = NULL,
+		.private_data = out,
+	};
+	bool double_insn = false;
+	int i;
+
+	for (i = 0; i < len; i++) {
+		if (double_insn) {
+			double_insn = false;
+			continue;
+		}
+
+		double_insn = insn[i].code == (BPF_LD | BPF_IMM | BPF_DW);
+		print_bpf_insn(&cbs, insn + i, true);
+	}
+}
+
+/* We share code with kernel BPF disassembler, it adds '(FF) ' prefix
+ * for each instruction (FF stands for instruction `code` byte).
+ * This function removes the prefix inplace for each line in `str`.
+ */
+static void remove_insn_prefix(char *str, int size)
+{
+	const int prefix_size = 5;
+
+	int write_pos = 0, read_pos = prefix_size;
+	int len = strlen(str);
+	char c;
+
+	size = min(size, len);
+
+	while (read_pos < size) {
+		c = str[read_pos++];
+		if (c == 0)
+			break;
+		str[write_pos++] = c;
+		if (c == '\n')
+			read_pos += prefix_size;
+	}
+	str[write_pos] = 0;
+}
+
+struct prog_info {
+	char *prog_kind;
+	enum bpf_prog_type prog_type;
+	enum bpf_attach_type expected_attach_type;
+	struct bpf_insn *prog;
+	u32 prog_len;
+};
+
+static void match_program(struct btf *btf,
+			  struct prog_info *pinfo,
+			  char *pattern,
+			  char *reg_map[][2],
+			  bool skip_first_insn)
+{
+	struct bpf_insn *buf = NULL;
+	int err = 0, prog_fd = 0;
+	FILE *prog_out = NULL;
+	char *text = NULL;
+	__u32 cnt = 0;
+
+	text = calloc(MAX_PROG_TEXT_SZ, 1);
+	if (!text) {
+		PRINT_FAIL("Can't allocate %d bytes\n", MAX_PROG_TEXT_SZ);
+		goto out;
+	}
+
+	// TODO: log level
+	LIBBPF_OPTS(bpf_prog_load_opts, opts);
+	opts.log_buf = text;
+	opts.log_size = MAX_PROG_TEXT_SZ;
+	opts.log_level = 1 | 2 | 4;
+	opts.expected_attach_type = pinfo->expected_attach_type;
+
+	prog_fd = bpf_prog_load(pinfo->prog_type, NULL, "GPL",
+				pinfo->prog, pinfo->prog_len, &opts);
+	if (prog_fd < 0) {
+		PRINT_FAIL("Can't load program, errno %d (%s), verifier log:\n%s\n",
+			   errno, strerror(errno), text);
+		goto out;
+	}
+
+	memset(text, 0, MAX_PROG_TEXT_SZ);
+
+	err = get_xlated_program(prog_fd, &buf, &cnt);
+	if (err) {
+		PRINT_FAIL("Can't load back BPF program\n");
+		goto out;
+	}
+
+	prog_out = fmemopen(text, MAX_PROG_TEXT_SZ - 1, "w");
+	if (!prog_out) {
+		PRINT_FAIL("Can't open memory stream\n");
+		goto out;
+	}
+	if (skip_first_insn)
+		print_xlated(prog_out, buf + 1, cnt - 1);
+	else
+		print_xlated(prog_out, buf, cnt);
+	fclose(prog_out);
+	remove_insn_prefix(text, MAX_PROG_TEXT_SZ);
+
+	ASSERT_TRUE(match_pattern(btf, pattern, text, reg_map),
+		    pinfo->prog_kind);
+
+out:
+	if (prog_fd)
+		close(prog_fd);
+	free(buf);
+	free(text);
+}
+
+static void run_one_testcase(struct btf *btf, struct test_case *test)
+{
+	struct prog_info pinfo = {};
+	int bpf_sz;
+
+	if (!test__start_subtest(test->name))
+		return;
+
+	switch (test->field_sz) {
+	case 8:
+		bpf_sz = BPF_DW;
+		break;
+	case 4:
+		bpf_sz = BPF_W;
+		break;
+	case 2:
+		bpf_sz = BPF_H;
+		break;
+	case 1:
+		bpf_sz = BPF_B;
+		break;
+	default:
+		PRINT_FAIL("Unexpected field size: %d, want 8,4,2 or 1\n", test->field_sz);
+		return;
+	}
+
+	pinfo.prog_type = test->prog_type;
+	pinfo.expected_attach_type = test->expected_attach_type;
+
+	if (test->read) {
+		struct bpf_insn ldx_prog[] = {
+			BPF_LDX_MEM(bpf_sz, BPF_REG_2, BPF_REG_1, test->field_offset),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		};
+		char *reg_map[][2] = {
+			{ "$ctx", "r1" },
+			{ "$dst", "r2" },
+			{}
+		};
+
+		pinfo.prog_kind = "LDX";
+		pinfo.prog = ldx_prog;
+		pinfo.prog_len = ARRAY_SIZE(ldx_prog);
+		match_program(btf, &pinfo, test->read, reg_map, false);
+	}
+
+	if (test->write || test->write_st || test->write_stx) {
+		struct bpf_insn stx_prog[] = {
+			BPF_MOV64_IMM(BPF_REG_2, 0),
+			BPF_STX_MEM(bpf_sz, BPF_REG_1, BPF_REG_2, test->field_offset),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		};
+		char *stx_reg_map[][2] = {
+			{ "$ctx", "r1" },
+			{ "$src", "r2" },
+			{}
+		};
+		struct bpf_insn st_prog[] = {
+			BPF_ST_MEM(bpf_sz, BPF_REG_1, test->field_offset,
+				   test->st_value.use ? test->st_value.value : 42),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		};
+		char *st_reg_map[][2] = {
+			{ "$ctx", "r1" },
+			{ "$src", "42" },
+			{}
+		};
+
+		if (test->write || test->write_stx) {
+			char *pattern = test->write_stx ? test->write_stx : test->write;
+
+			pinfo.prog_kind = "STX";
+			pinfo.prog = stx_prog;
+			pinfo.prog_len = ARRAY_SIZE(stx_prog);
+			match_program(btf, &pinfo, pattern, stx_reg_map, true);
+		}
+
+		if (test->write || test->write_st) {
+			char *pattern = test->write_st ? test->write_st : test->write;
+
+			pinfo.prog_kind = "ST";
+			pinfo.prog = st_prog;
+			pinfo.prog_len = ARRAY_SIZE(st_prog);
+			match_program(btf, &pinfo, pattern, st_reg_map, false);
+		}
+	}
+
+	test__end_subtest();
+}
+
+void test_ctx_rewrite(void)
+{
+	struct btf *btf;
+	int i;
+
+	field_regex = compile_regex("^([[:alpha:]_][[:alnum:]_]+)::([[:alpha:]_][[:alnum:]_]+)");
+	ident_regex = compile_regex("^[[:alpha:]_][[:alnum:]_]+");
+	if (!field_regex || !ident_regex)
+		return;
+
+	btf = btf__load_vmlinux_btf();
+	if (!btf) {
+		PRINT_FAIL("Can't load vmlinux BTF, errno %d (%s)\n", errno, strerror(errno));
+		goto out;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(test_cases); ++i)
+		run_one_testcase(btf, &test_cases[i]);
+
+out:
+	btf__free(btf);
+	free_regex(field_regex);
+	free_regex(ident_regex);
+}

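The `type::field` pattern references above are resolved against vmlinux BTF at run time. Stripped of the anonymous-member recursion that find_field_offset_aux() performs, the lookup boils down to the following standalone sketch using plain libbpf:

	#include <bpf/btf.h>
	#include <string.h>

	static int field_offset_bytes(const struct btf *btf,
				      const char *tname, const char *fname)
	{
		const struct btf_type *t;
		const struct btf_member *m;
		__s32 id = btf__find_by_name(btf, tname);
		int i;

		if (id < 0)
			return -1;
		t = btf__type_by_id(btf, id);
		m = btf_members(t);
		for (i = 0; i < btf_vlen(t); i++, m++)
			if (!strcmp(btf__name_by_offset(btf, m->name_off), fname))
				return m->offset / 8; /* member offsets are stored in bits */
		return -1;
	}

For example, field_offset_bytes(btf, "sk_buff", "mark") yields the number the disassembly is expected to contain at that position.
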
@@ -10,14 +10,6 @@
 #include "network_helpers.h"
 #include "decap_sanity.skel.h"
 
-#define SYS(fmt, ...) \
-	({ \
-		char cmd[1024]; \
-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
-		if (!ASSERT_OK(system(cmd), cmd)) \
-			goto fail; \
-	})
-
 #define NS_TEST "decap_sanity_ns"
 #define IPV6_IFACE_ADDR "face::1"
 #define UDP_TEST_PORT 7777
@@ -37,9 +29,9 @@ void test_decap_sanity(void)
 	if (!ASSERT_OK_PTR(skel, "skel open_and_load"))
 		return;
 
-	SYS("ip netns add %s", NS_TEST);
-	SYS("ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
-	SYS("ip -net %s link set dev lo up", NS_TEST);
+	SYS(fail, "ip netns add %s", NS_TEST);
+	SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
+	SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
 
 	nstoken = open_netns(NS_TEST);
 	if (!ASSERT_OK_PTR(nstoken, "open_netns"))
@@ -80,6 +72,6 @@ fail:
 		bpf_tc_hook_destroy(&qdisc_hook);
 		close_netns(nstoken);
 	}
-	system("ip netns del " NS_TEST " &> /dev/null");
+	SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
 	decap_sanity__destroy(skel);
 }

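This and the following hunks drop the per-file SYS() macro copies in favor of shared helpers whose first argument names the cleanup label to jump to on failure. The shared definitions live outside this excerpt (in test_progs.h); judging from the call sites they are presumably along these lines:

	#define SYS(goto_label, fmt, ...)				\
		({							\
			char cmd[1024];					\
			snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
			if (!ASSERT_OK(system(cmd), cmd))		\
				goto goto_label;			\
		})

	#define SYS_NOFAIL(fmt, ...)					\
		({							\
			char cmd[1024];					\
			snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
			system(cmd);					\
		})

SYS_NOFAIL() replaces the bare system() calls in teardown paths, where a failing "ip netns del" must not abort the test.
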
@@ -2,20 +2,32 @@
 /* Copyright (c) 2022 Facebook */
 
 #include <test_progs.h>
+#include <network_helpers.h>
 #include "dynptr_fail.skel.h"
 #include "dynptr_success.skel.h"
 
-static const char * const success_tests[] = {
-	"test_read_write",
-	"test_data_slice",
-	"test_ringbuf",
+enum test_setup_type {
+	SETUP_SYSCALL_SLEEP,
+	SETUP_SKB_PROG,
+};
+
+static struct {
+	const char *prog_name;
+	enum test_setup_type type;
+} success_tests[] = {
+	{"test_read_write", SETUP_SYSCALL_SLEEP},
+	{"test_dynptr_data", SETUP_SYSCALL_SLEEP},
+	{"test_ringbuf", SETUP_SYSCALL_SLEEP},
+	{"test_skb_readonly", SETUP_SKB_PROG},
+	{"test_dynptr_skb_data", SETUP_SKB_PROG},
 };
 
-static void verify_success(const char *prog_name)
+static void verify_success(const char *prog_name, enum test_setup_type setup_type)
 {
 	struct dynptr_success *skel;
 	struct bpf_program *prog;
 	struct bpf_link *link;
+	int err;
 
 	skel = dynptr_success__open();
 	if (!ASSERT_OK_PTR(skel, "dynptr_success__open"))
@@ -23,24 +35,54 @@ static void verify_success(const char *prog_name)
 
 	skel->bss->pid = getpid();
 
-	dynptr_success__load(skel);
-	if (!ASSERT_OK_PTR(skel, "dynptr_success__load"))
-		goto cleanup;
-
 	prog = bpf_object__find_program_by_name(skel->obj, prog_name);
 	if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
 		goto cleanup;
 
-	link = bpf_program__attach(prog);
-	if (!ASSERT_OK_PTR(link, "bpf_program__attach"))
+	bpf_program__set_autoload(prog, true);
+
+	err = dynptr_success__load(skel);
+	if (!ASSERT_OK(err, "dynptr_success__load"))
 		goto cleanup;
 
-	usleep(1);
+	switch (setup_type) {
+	case SETUP_SYSCALL_SLEEP:
+		link = bpf_program__attach(prog);
+		if (!ASSERT_OK_PTR(link, "bpf_program__attach"))
+			goto cleanup;
+
+		usleep(1);
+
+		bpf_link__destroy(link);
+		break;
+	case SETUP_SKB_PROG:
+	{
+		int prog_fd;
+		char buf[64];
+
+		LIBBPF_OPTS(bpf_test_run_opts, topts,
+			    .data_in = &pkt_v4,
+			    .data_size_in = sizeof(pkt_v4),
+			    .data_out = buf,
+			    .data_size_out = sizeof(buf),
+			    .repeat = 1,
+		);
+
+		prog_fd = bpf_program__fd(prog);
+		if (!ASSERT_GE(prog_fd, 0, "prog_fd"))
+			goto cleanup;
+
+		err = bpf_prog_test_run_opts(prog_fd, &topts);
+
+		if (!ASSERT_OK(err, "test_run"))
+			goto cleanup;
+
+		break;
+	}
+	}
 
 	ASSERT_EQ(skel->bss->err, 0, "err");
 
-	bpf_link__destroy(link);
-
 cleanup:
 	dynptr_success__destroy(skel);
 }
@@ -50,10 +92,10 @@ void test_dynptr(void)
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(success_tests); i++) {
-		if (!test__start_subtest(success_tests[i]))
+		if (!test__start_subtest(success_tests[i].prog_name))
 			continue;
 
-		verify_success(success_tests[i]);
+		verify_success(success_tests[i].prog_name, success_tests[i].type);
 	}
 
 	RUN_TESTS(dynptr_fail);

@@ -4,11 +4,6 @@
 #include <net/if.h>
 #include "empty_skb.skel.h"
 
-#define SYS(cmd) ({ \
-	if (!ASSERT_OK(system(cmd), (cmd))) \
-		goto out; \
-})
-
 void test_empty_skb(void)
 {
 	LIBBPF_OPTS(bpf_test_run_opts, tattr);
@@ -93,18 +88,18 @@ void test_empty_skb(void)
 		},
 	};
 
-	SYS("ip netns add empty_skb");
+	SYS(out, "ip netns add empty_skb");
 	tok = open_netns("empty_skb");
-	SYS("ip link add veth0 type veth peer veth1");
-	SYS("ip link set dev veth0 up");
-	SYS("ip link set dev veth1 up");
-	SYS("ip addr add 10.0.0.1/8 dev veth0");
-	SYS("ip addr add 10.0.0.2/8 dev veth1");
+	SYS(out, "ip link add veth0 type veth peer veth1");
+	SYS(out, "ip link set dev veth0 up");
+	SYS(out, "ip link set dev veth1 up");
+	SYS(out, "ip addr add 10.0.0.1/8 dev veth0");
+	SYS(out, "ip addr add 10.0.0.2/8 dev veth1");
 	veth_ifindex = if_nametoindex("veth0");
 
-	SYS("ip link add ipip0 type ipip local 10.0.0.1 remote 10.0.0.2");
-	SYS("ip link set ipip0 up");
-	SYS("ip addr add 192.168.1.1/16 dev ipip0");
+	SYS(out, "ip link add ipip0 type ipip local 10.0.0.1 remote 10.0.0.2");
+	SYS(out, "ip link set ipip0 up");
+	SYS(out, "ip addr add 192.168.1.1/16 dev ipip0");
 	ipip_ifindex = if_nametoindex("ipip0");
 
 	bpf_obj = empty_skb__open_and_load();
@@ -142,5 +137,5 @@ out:
 		empty_skb__destroy(bpf_obj);
 	if (tok)
 		close_netns(tok);
-	system("ip netns del empty_skb");
+	SYS_NOFAIL("ip netns del empty_skb");
 }

@@ -8,14 +8,6 @@
 #include "network_helpers.h"
 #include "fib_lookup.skel.h"
 
-#define SYS(fmt, ...) \
-	({ \
-		char cmd[1024]; \
-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
-		if (!ASSERT_OK(system(cmd), cmd)) \
-			goto fail; \
-	})
-
 #define NS_TEST "fib_lookup_ns"
 #define IPV6_IFACE_ADDR "face::face"
 #define IPV6_NUD_FAILED_ADDR "face::1"
@@ -59,16 +51,16 @@ static int setup_netns(void)
 {
 	int err;
 
-	SYS("ip link add veth1 type veth peer name veth2");
-	SYS("ip link set dev veth1 up");
+	SYS(fail, "ip link add veth1 type veth peer name veth2");
+	SYS(fail, "ip link set dev veth1 up");
 
-	SYS("ip addr add %s/64 dev veth1 nodad", IPV6_IFACE_ADDR);
-	SYS("ip neigh add %s dev veth1 nud failed", IPV6_NUD_FAILED_ADDR);
-	SYS("ip neigh add %s dev veth1 lladdr %s nud stale", IPV6_NUD_STALE_ADDR, DMAC);
+	SYS(fail, "ip addr add %s/64 dev veth1 nodad", IPV6_IFACE_ADDR);
+	SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV6_NUD_FAILED_ADDR);
+	SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV6_NUD_STALE_ADDR, DMAC);
 
-	SYS("ip addr add %s/24 dev veth1 nodad", IPV4_IFACE_ADDR);
-	SYS("ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR);
-	SYS("ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC);
+	SYS(fail, "ip addr add %s/24 dev veth1 nodad", IPV4_IFACE_ADDR);
+	SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR);
+	SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC);
 
 	err = write_sysctl("/proc/sys/net/ipv4/conf/veth1/forwarding", "1");
 	if (!ASSERT_OK(err, "write_sysctl(net.ipv4.conf.veth1.forwarding)"))
@@ -140,7 +132,7 @@ void test_fib_lookup(void)
 		return;
 	prog_fd = bpf_program__fd(skel->progs.fib_lookup);
 
-	SYS("ip netns add %s", NS_TEST);
+	SYS(fail, "ip netns add %s", NS_TEST);
 
 	nstoken = open_netns(NS_TEST);
 	if (!ASSERT_OK_PTR(nstoken, "open_netns"))
@@ -182,6 +174,6 @@ void test_fib_lookup(void)
 fail:
 	if (nstoken)
 		close_netns(nstoken);
-	system("ip netns del " NS_TEST " &> /dev/null");
+	SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
 	fib_lookup__destroy(skel);
 }

@@ -345,6 +345,30 @@ struct test tests[] = {
 		.flags = BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL,
 		.retval = BPF_OK,
 	},
+	{
+		.name = "ipv6-empty-flow-label",
+		.pkt.ipv6 = {
+			.eth.h_proto = __bpf_constant_htons(ETH_P_IPV6),
+			.iph.nexthdr = IPPROTO_TCP,
+			.iph.payload_len = __bpf_constant_htons(MAGIC_BYTES),
+			.iph.flow_lbl = { 0x00, 0x00, 0x00 },
+			.tcp.doff = 5,
+			.tcp.source = 80,
+			.tcp.dest = 8080,
+		},
+		.keys = {
+			.flags = BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL,
+			.nhoff = ETH_HLEN,
+			.thoff = ETH_HLEN + sizeof(struct ipv6hdr),
+			.addr_proto = ETH_P_IPV6,
+			.ip_proto = IPPROTO_TCP,
+			.n_proto = __bpf_constant_htons(ETH_P_IPV6),
+			.sport = 80,
+			.dport = 8080,
+		},
+		.flags = BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL,
+		.retval = BPF_OK,
+	},
 	{
 		.name = "ipip-encap",
 		.pkt.ipip = {

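The new "ipv6-empty-flow-label" case checks that BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL does not stop dissection when the label is all zeroes: .thoff is expected to land past the IPv6 header and the TCP ports to be filled in. For reference, the 20-bit label the dissector inspects maps onto the three flow_lbl bytes like this (illustrative helper, not code from the patch):

	static inline __u32 ipv6_flow_label(const struct ipv6hdr *ip6h)
	{
		/* the low nibble of flow_lbl[0] holds the label's top 4 bits */
		return ((__u32)(ip6h->flow_lbl[0] & 0x0f) << 16) |
		       ((__u32)ip6h->flow_lbl[1] << 8) |
			(__u32)ip6h->flow_lbl[2];
	}
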
@@ -93,4 +93,6 @@ void test_l4lb_all(void)
 		test_l4lb("test_l4lb.bpf.o");
 	if (test__start_subtest("l4lb_noinline"))
 		test_l4lb("test_l4lb_noinline.bpf.o");
+	if (test__start_subtest("l4lb_noinline_dynptr"))
+		test_l4lb("test_l4lb_noinline_dynptr.bpf.o");
 }

@@ -141,7 +141,7 @@ void test_log_fixup(void)
 	if (test__start_subtest("bad_core_relo_trunc_partial"))
 		bad_core_relo(300, TRUNC_PARTIAL /* truncate original log a bit */);
 	if (test__start_subtest("bad_core_relo_trunc_full"))
-		bad_core_relo(250, TRUNC_FULL /* truncate also libbpf's message patch */);
+		bad_core_relo(210, TRUNC_FULL /* truncate also libbpf's message patch */);
 	if (test__start_subtest("bad_core_relo_subprog"))
 		bad_core_relo_subprog();
 	if (test__start_subtest("missing_map"))

@@ -4,70 +4,160 @@
 
 #include "map_kptr.skel.h"
 #include "map_kptr_fail.skel.h"
+#include "rcu_tasks_trace_gp.skel.h"
 
 static void test_map_kptr_success(bool test_run)
 {
+	LIBBPF_OPTS(bpf_test_run_opts, lopts);
 	LIBBPF_OPTS(bpf_test_run_opts, opts,
 		.data_in = &pkt_v4,
 		.data_size_in = sizeof(pkt_v4),
 		.repeat = 1,
 	);
+	int key = 0, ret, cpu;
 	struct map_kptr *skel;
-	int key = 0, ret;
-	char buf[16];
+	char buf[16], *pbuf;
 
 	skel = map_kptr__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "map_kptr__open_and_load"))
 		return;
 
-	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref), &opts);
-	ASSERT_OK(ret, "test_map_kptr_ref refcount");
-	ASSERT_OK(opts.retval, "test_map_kptr_ref retval");
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref1), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref1 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref1 retval");
 	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref2), &opts);
 	ASSERT_OK(ret, "test_map_kptr_ref2 refcount");
 	ASSERT_OK(opts.retval, "test_map_kptr_ref2 retval");
 
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref1), &lopts);
+	ASSERT_OK(ret, "test_ls_map_kptr_ref1 refcount");
+	ASSERT_OK(lopts.retval, "test_ls_map_kptr_ref1 retval");
+
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref2), &lopts);
+	ASSERT_OK(ret, "test_ls_map_kptr_ref2 refcount");
+	ASSERT_OK(lopts.retval, "test_ls_map_kptr_ref2 retval");
+
 	if (test_run)
 		goto exit;
 
+	cpu = libbpf_num_possible_cpus();
+	if (!ASSERT_GT(cpu, 0, "libbpf_num_possible_cpus"))
+		goto exit;
+
+	pbuf = calloc(cpu, sizeof(buf));
+	if (!ASSERT_OK_PTR(pbuf, "calloc(pbuf)"))
+		goto exit;
+
 	ret = bpf_map__update_elem(skel->maps.array_map,
 				   &key, sizeof(key), buf, sizeof(buf), 0);
 	ASSERT_OK(ret, "array_map update");
-	ret = bpf_map__update_elem(skel->maps.array_map,
-				   &key, sizeof(key), buf, sizeof(buf), 0);
-	ASSERT_OK(ret, "array_map update2");
+	skel->data->ref--;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+	ret = bpf_map__update_elem(skel->maps.pcpu_array_map,
+				   &key, sizeof(key), pbuf, cpu * sizeof(buf), 0);
+	ASSERT_OK(ret, "pcpu_array_map update");
+	skel->data->ref--;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
 
 	ret = bpf_map__update_elem(skel->maps.hash_map,
 				   &key, sizeof(key), buf, sizeof(buf), 0);
 	ASSERT_OK(ret, "hash_map update");
 	ret = bpf_map__delete_elem(skel->maps.hash_map, &key, sizeof(key), 0);
 	ASSERT_OK(ret, "hash_map delete");
+	skel->data->ref--;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+	ret = bpf_map__delete_elem(skel->maps.pcpu_hash_map, &key, sizeof(key), 0);
+	ASSERT_OK(ret, "pcpu_hash_map delete");
+	skel->data->ref--;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
 
 	ret = bpf_map__update_elem(skel->maps.hash_malloc_map,
 				   &key, sizeof(key), buf, sizeof(buf), 0);
 	ASSERT_OK(ret, "hash_malloc_map update");
 	ret = bpf_map__delete_elem(skel->maps.hash_malloc_map, &key, sizeof(key), 0);
 	ASSERT_OK(ret, "hash_malloc_map delete");
+	skel->data->ref--;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+	ret = bpf_map__delete_elem(skel->maps.pcpu_hash_malloc_map, &key, sizeof(key), 0);
+	ASSERT_OK(ret, "pcpu_hash_malloc_map delete");
+	skel->data->ref--;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
 
 	ret = bpf_map__update_elem(skel->maps.lru_hash_map,
 				   &key, sizeof(key), buf, sizeof(buf), 0);
 	ASSERT_OK(ret, "lru_hash_map update");
 	ret = bpf_map__delete_elem(skel->maps.lru_hash_map, &key, sizeof(key), 0);
 	ASSERT_OK(ret, "lru_hash_map delete");
+	skel->data->ref--;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+	ret = bpf_map__delete_elem(skel->maps.lru_pcpu_hash_map, &key, sizeof(key), 0);
+	ASSERT_OK(ret, "lru_pcpu_hash_map delete");
+	skel->data->ref--;
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref3 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval");
+
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref_del), &lopts);
+	ASSERT_OK(ret, "test_ls_map_kptr_ref_del delete");
+	skel->data->ref--;
+	ASSERT_OK(lopts.retval, "test_ls_map_kptr_ref_del retval");
 
+	free(pbuf);
 exit:
 	map_kptr__destroy(skel);
 }
 
-void test_map_kptr(void)
+static int kern_sync_rcu_tasks_trace(struct rcu_tasks_trace_gp *rcu)
 {
-	if (test__start_subtest("success")) {
+	long gp_seq = READ_ONCE(rcu->bss->gp_seq);
+	LIBBPF_OPTS(bpf_test_run_opts, opts);
+
+	if (!ASSERT_OK(bpf_prog_test_run_opts(bpf_program__fd(rcu->progs.do_call_rcu_tasks_trace),
+					      &opts), "do_call_rcu_tasks_trace"))
+		return -EFAULT;
+	if (!ASSERT_OK(opts.retval, "opts.retval == 0"))
+		return -EFAULT;
+	while (gp_seq == READ_ONCE(rcu->bss->gp_seq))
+		sched_yield();
+	return 0;
+}
+
+void serial_test_map_kptr(void)
+{
+	struct rcu_tasks_trace_gp *skel;
+
+	RUN_TESTS(map_kptr_fail);
+
+	skel = rcu_tasks_trace_gp__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "rcu_tasks_trace_gp__open_and_load"))
+		return;
+	if (!ASSERT_OK(rcu_tasks_trace_gp__attach(skel), "rcu_tasks_trace_gp__attach"))
+		goto end;
+
+	if (test__start_subtest("success-map")) {
 		test_map_kptr_success(true);
-		/* Do test_run twice, so that we see refcount going back to 1
-		 * after we leave it in map from first iteration.
-		 */
+
+		ASSERT_OK(kern_sync_rcu_tasks_trace(skel), "sync rcu_tasks_trace");
+		ASSERT_OK(kern_sync_rcu(), "sync rcu");
+		/* Observe refcount dropping to 1 on bpf_map_free_deferred */
 		test_map_kptr_success(false);
+
+		ASSERT_OK(kern_sync_rcu_tasks_trace(skel), "sync rcu_tasks_trace");
+		ASSERT_OK(kern_sync_rcu(), "sync rcu");
+		/* Observe refcount dropping to 1 on synchronous delete elem */
+		test_map_kptr_success(true);
 	}
 
-	RUN_TESTS(map_kptr_fail);
+end:
+	rcu_tasks_trace_gp__destroy(skel);
+	return;
 }

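kern_sync_rcu_tasks_trace() above works by kicking a call_rcu_tasks_trace() callback from BPF and spinning until a kprobe on the grace-period completion path bumps gp_seq. The companion skeleton is not part of this excerpt; progs/rcu_tasks_trace_gp.c presumably looks roughly like this sketch (deleting task local storage is one operation whose teardown goes through call_rcu_tasks_trace()):

	struct {
		__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
		__uint(map_flags, BPF_F_NO_PREALLOC);
		__type(key, int);
		__type(value, int);
	} task_ls_map SEC(".maps");

	long gp_seq;

	SEC("syscall")
	int do_call_rcu_tasks_trace(void *ctx)
	{
		struct task_struct *current;
		int *v;

		current = bpf_get_current_task_btf();
		v = bpf_task_storage_get(&task_ls_map, current, NULL,
					 BPF_LOCAL_STORAGE_GET_F_CREATE);
		if (!v)
			return 1;
		/* storage delete is freed via call_rcu_tasks_trace() */
		return bpf_task_storage_delete(&task_ls_map, current);
	}

	SEC("kprobe/rcu_tasks_trace_postgp")
	int rcu_tasks_trace_postgp(void *ctx)
	{
		__sync_add_and_fetch(&gp_seq, 1);
		return 0;
	}
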
@@ -7,6 +7,8 @@
 #include "network_helpers.h"
 #include "mptcp_sock.skel.h"
 
+#define NS_TEST "mptcp_ns"
+
 #ifndef TCP_CA_NAME_MAX
 #define TCP_CA_NAME_MAX 16
 #endif
@@ -138,12 +140,20 @@ out:
 
 static void test_base(void)
 {
+	struct nstoken *nstoken = NULL;
 	int server_fd, cgroup_fd;
 
 	cgroup_fd = test__join_cgroup("/mptcp");
 	if (!ASSERT_GE(cgroup_fd, 0, "test__join_cgroup"))
 		return;
 
+	SYS(fail, "ip netns add %s", NS_TEST);
+	SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
+
+	nstoken = open_netns(NS_TEST);
+	if (!ASSERT_OK_PTR(nstoken, "open_netns"))
+		goto fail;
+
 	/* without MPTCP */
 	server_fd = start_server(AF_INET, SOCK_STREAM, NULL, 0, 0);
 	if (!ASSERT_GE(server_fd, 0, "start_server"))
@@ -157,13 +167,18 @@ with_mptcp:
 	/* with MPTCP */
 	server_fd = start_mptcp_server(AF_INET, NULL, 0, 0);
 	if (!ASSERT_GE(server_fd, 0, "start_mptcp_server"))
-		goto close_cgroup_fd;
+		goto fail;
 
 	ASSERT_OK(run_test(cgroup_fd, server_fd, true), "run_test mptcp");
 
 	close(server_fd);
 
-close_cgroup_fd:
+fail:
+	if (nstoken)
+		close_netns(nstoken);
+
+	SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
+
 	close(cgroup_fd);
 }

@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "test_parse_tcp_hdr_opt.skel.h"
+#include "test_parse_tcp_hdr_opt_dynptr.skel.h"
+#include "test_tcp_hdr_options.h"
+
+struct test_pkt {
+	struct ipv6_packet pk6_v6;
+	u8 options[16];
+} __packed;
+
+struct test_pkt pkt = {
+	.pk6_v6.eth.h_proto = __bpf_constant_htons(ETH_P_IPV6),
+	.pk6_v6.iph.nexthdr = IPPROTO_TCP,
+	.pk6_v6.iph.payload_len = __bpf_constant_htons(MAGIC_BYTES),
+	.pk6_v6.tcp.urg_ptr = 123,
+	.pk6_v6.tcp.doff = 9, /* 16 bytes of options */
+
+	.options = {
+		TCPOPT_MSS, 4, 0x05, 0xB4, TCPOPT_NOP, TCPOPT_NOP,
+		0, 6, 0xBB, 0xBB, 0xBB, 0xBB, TCPOPT_EOL
+	},
+};
+
+static void test_parse_opt(void)
+{
+	struct test_parse_tcp_hdr_opt *skel;
+	struct bpf_program *prog;
+	char buf[128];
+	int err;
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		    .data_in = &pkt,
+		    .data_size_in = sizeof(pkt),
+		    .data_out = buf,
+		    .data_size_out = sizeof(buf),
+		    .repeat = 3,
+	);
+
+	skel = test_parse_tcp_hdr_opt__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+		return;
+
+	pkt.options[6] = skel->rodata->tcp_hdr_opt_kind_tpr;
+	prog = skel->progs.xdp_ingress_v6;
+
+	err = bpf_prog_test_run_opts(bpf_program__fd(prog), &topts);
+	ASSERT_OK(err, "ipv6 test_run");
+	ASSERT_EQ(topts.retval, XDP_PASS, "ipv6 test_run retval");
+	ASSERT_EQ(skel->bss->server_id, 0xBBBBBBBB, "server id");
+
+	test_parse_tcp_hdr_opt__destroy(skel);
+}
+
+static void test_parse_opt_dynptr(void)
+{
+	struct test_parse_tcp_hdr_opt_dynptr *skel;
+	struct bpf_program *prog;
+	char buf[128];
+	int err;
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		    .data_in = &pkt,
+		    .data_size_in = sizeof(pkt),
+		    .data_out = buf,
+		    .data_size_out = sizeof(buf),
+		    .repeat = 3,
+	);
+
+	skel = test_parse_tcp_hdr_opt_dynptr__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+		return;
+
+	pkt.options[6] = skel->rodata->tcp_hdr_opt_kind_tpr;
+	prog = skel->progs.xdp_ingress_v6;
+
+	err = bpf_prog_test_run_opts(bpf_program__fd(prog), &topts);
+	ASSERT_OK(err, "ipv6 test_run");
+	ASSERT_EQ(topts.retval, XDP_PASS, "ipv6 test_run retval");
+	ASSERT_EQ(skel->bss->server_id, 0xBBBBBBBB, "server id");
+
+	test_parse_tcp_hdr_opt_dynptr__destroy(skel);
+}
+
+void test_parse_tcp_hdr_opt(void)
+{
+	if (test__start_subtest("parse_tcp_hdr_opt"))
+		test_parse_opt();
+	if (test__start_subtest("parse_tcp_hdr_opt_dynptr"))
+		test_parse_opt_dynptr();
+}

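Both flavors scan the 16-byte option area for the TPR kind (patched into pkt.options[6] above) and record the 4-byte server id that follows it. A hedged sketch of the dynptr-based walk for a single option (the real parser is in progs/test_parse_tcp_hdr_opt_dynptr.c, outside this excerpt; parse_one_opt is an illustrative name):

	static int parse_one_opt(struct bpf_dynptr *ptr, __u32 *off, __u32 *server_id)
	{
		__u8 buf[6], *opt;

		opt = bpf_dynptr_slice(ptr, *off, buf, sizeof(buf));
		if (!opt)
			return -1;
		if (opt[0] == TCPOPT_EOL)
			return -1;		/* end of options */
		if (opt[0] == TCPOPT_NOP) {
			(*off)++;		/* single-byte option */
			return 0;
		}
		if (opt[1] < 2)
			return -1;		/* malformed length */
		if (opt[0] == tcp_hdr_opt_kind_tpr && opt[1] == 6)
			__builtin_memcpy(server_id, &opt[2], sizeof(*server_id));
		*off += opt[1];
		return 0;
	}
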
@@ -25,10 +25,10 @@ static void test_success(void)
 
 	bpf_program__set_autoload(skel->progs.get_cgroup_id, true);
 	bpf_program__set_autoload(skel->progs.task_succ, true);
-	bpf_program__set_autoload(skel->progs.no_lock, true);
 	bpf_program__set_autoload(skel->progs.two_regions, true);
 	bpf_program__set_autoload(skel->progs.non_sleepable_1, true);
 	bpf_program__set_autoload(skel->progs.non_sleepable_2, true);
+	bpf_program__set_autoload(skel->progs.task_trusted_non_rcuptr, true);
 	err = rcu_read_lock__load(skel);
 	if (!ASSERT_OK(err, "skel_load"))
 		goto out;
@@ -69,6 +69,7 @@ out:
 
 static const char * const inproper_region_tests[] = {
 	"miss_lock",
+	"no_lock",
 	"miss_unlock",
 	"non_sleepable_rcu_mismatch",
 	"inproper_sleepable_helper",
@@ -99,7 +100,6 @@ out:
 }
 
 static const char * const rcuptr_misuse_tests[] = {
-	"task_untrusted_non_rcuptr",
 	"task_untrusted_rcuptr",
 	"cross_rcu_region",
 };
@@ -128,17 +128,8 @@ out:
 
 void test_rcu_read_lock(void)
 {
-	struct btf *vmlinux_btf;
 	int cgroup_fd;
 
-	vmlinux_btf = btf__load_vmlinux_btf();
-	if (!ASSERT_OK_PTR(vmlinux_btf, "could not load vmlinux BTF"))
-		return;
-	if (btf__find_by_name_kind(vmlinux_btf, "rcu", BTF_KIND_TYPE_TAG) < 0) {
-		test__skip();
-		goto out;
-	}
-
 	cgroup_fd = test__join_cgroup("/rcu_read_lock");
 	if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup /rcu_read_lock"))
 		goto out;
@@ -153,6 +144,5 @@ void test_rcu_read_lock(void)
 	if (test__start_subtest("negative_tests_rcuptr_misuse"))
 		test_rcuptr_misuse();
 	close(cgroup_fd);
-out:
-	btf__free(vmlinux_btf);
+out:;
 }

@ -137,24 +137,16 @@ static int get_ifaddr(const char *name, char *ifaddr)
|
|||
return 0;
|
||||
}
|
||||
|
||||
#define SYS(fmt, ...) \
|
||||
({ \
|
||||
char cmd[1024]; \
|
||||
snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
|
||||
if (!ASSERT_OK(system(cmd), cmd)) \
|
||||
goto fail; \
|
||||
})
|
||||
|
||||
static int netns_setup_links_and_routes(struct netns_setup_result *result)
|
||||
{
|
||||
struct nstoken *nstoken = NULL;
|
||||
char veth_src_fwd_addr[IFADDR_STR_LEN+1] = {};
|
||||
|
||||
SYS("ip link add veth_src type veth peer name veth_src_fwd");
|
||||
SYS("ip link add veth_dst type veth peer name veth_dst_fwd");
|
||||
SYS(fail, "ip link add veth_src type veth peer name veth_src_fwd");
|
||||
SYS(fail, "ip link add veth_dst type veth peer name veth_dst_fwd");
|
||||
|
||||
SYS("ip link set veth_dst_fwd address " MAC_DST_FWD);
|
||||
SYS("ip link set veth_dst address " MAC_DST);
|
||||
SYS(fail, "ip link set veth_dst_fwd address " MAC_DST_FWD);
|
||||
SYS(fail, "ip link set veth_dst address " MAC_DST);
|
||||
|
||||
if (get_ifaddr("veth_src_fwd", veth_src_fwd_addr))
|
||||
goto fail;
|
||||
|
@ -175,27 +167,27 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
|
|||
if (!ASSERT_GT(result->ifindex_veth_dst_fwd, 0, "ifindex_veth_dst_fwd"))
|
||||
goto fail;
|
||||
|
||||
SYS("ip link set veth_src netns " NS_SRC);
|
||||
SYS("ip link set veth_src_fwd netns " NS_FWD);
|
||||
SYS("ip link set veth_dst_fwd netns " NS_FWD);
|
||||
SYS("ip link set veth_dst netns " NS_DST);
|
||||
SYS(fail, "ip link set veth_src netns " NS_SRC);
|
||||
SYS(fail, "ip link set veth_src_fwd netns " NS_FWD);
|
||||
SYS(fail, "ip link set veth_dst_fwd netns " NS_FWD);
|
||||
SYS(fail, "ip link set veth_dst netns " NS_DST);
|
||||
|
||||
/** setup in 'src' namespace */
|
||||
nstoken = open_netns(NS_SRC);
|
||||
if (!ASSERT_OK_PTR(nstoken, "setns src"))
|
||||
goto fail;
|
||||
|
||||
SYS("ip addr add " IP4_SRC "/32 dev veth_src");
|
||||
SYS("ip addr add " IP6_SRC "/128 dev veth_src nodad");
|
||||
SYS("ip link set dev veth_src up");
|
||||
SYS(fail, "ip addr add " IP4_SRC "/32 dev veth_src");
|
||||
SYS(fail, "ip addr add " IP6_SRC "/128 dev veth_src nodad");
|
||||
SYS(fail, "ip link set dev veth_src up");
|
||||
|
||||
SYS("ip route add " IP4_DST "/32 dev veth_src scope global");
|
||||
SYS("ip route add " IP4_NET "/16 dev veth_src scope global");
|
||||
SYS("ip route add " IP6_DST "/128 dev veth_src scope global");
|
||||
SYS(fail, "ip route add " IP4_DST "/32 dev veth_src scope global");
|
||||
SYS(fail, "ip route add " IP4_NET "/16 dev veth_src scope global");
|
||||
SYS(fail, "ip route add " IP6_DST "/128 dev veth_src scope global");
|
||||
|
||||
SYS("ip neigh add " IP4_DST " dev veth_src lladdr %s",
|
||||
SYS(fail, "ip neigh add " IP4_DST " dev veth_src lladdr %s",
|
||||
veth_src_fwd_addr);
|
||||
SYS("ip neigh add " IP6_DST " dev veth_src lladdr %s",
|
||||
SYS(fail, "ip neigh add " IP6_DST " dev veth_src lladdr %s",
|
||||
veth_src_fwd_addr);
|
||||
|
||||
close_netns(nstoken);
|
||||
|
@ -209,15 +201,15 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
|
|||
* needs v4 one in order to start ARP probing. IP4_NET route is added
|
||||
* to the endpoints so that the ARP processing will reply.
|
||||
*/
|
||||
SYS("ip addr add " IP4_SLL "/32 dev veth_src_fwd");
|
||||
SYS("ip addr add " IP4_DLL "/32 dev veth_dst_fwd");
|
||||
SYS("ip link set dev veth_src_fwd up");
|
||||
SYS("ip link set dev veth_dst_fwd up");
|
||||
SYS(fail, "ip addr add " IP4_SLL "/32 dev veth_src_fwd");
|
||||
SYS(fail, "ip addr add " IP4_DLL "/32 dev veth_dst_fwd");
|
||||
SYS(fail, "ip link set dev veth_src_fwd up");
|
||||
SYS(fail, "ip link set dev veth_dst_fwd up");
|
||||
|
||||
SYS("ip route add " IP4_SRC "/32 dev veth_src_fwd scope global");
|
||||
SYS("ip route add " IP6_SRC "/128 dev veth_src_fwd scope global");
|
||||
SYS("ip route add " IP4_DST "/32 dev veth_dst_fwd scope global");
|
||||
SYS("ip route add " IP6_DST "/128 dev veth_dst_fwd scope global");
|
||||
SYS(fail, "ip route add " IP4_SRC "/32 dev veth_src_fwd scope global");
|
||||
SYS(fail, "ip route add " IP6_SRC "/128 dev veth_src_fwd scope global");
|
||||
SYS(fail, "ip route add " IP4_DST "/32 dev veth_dst_fwd scope global");
|
||||
SYS(fail, "ip route add " IP6_DST "/128 dev veth_dst_fwd scope global");
|
||||
|
||||
close_netns(nstoken);
|
||||
|
||||
|
@@ -226,16 +218,16 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result)
 	if (!ASSERT_OK_PTR(nstoken, "setns dst"))
 		goto fail;

-	SYS("ip addr add " IP4_DST "/32 dev veth_dst");
-	SYS("ip addr add " IP6_DST "/128 dev veth_dst nodad");
-	SYS("ip link set dev veth_dst up");
+	SYS(fail, "ip addr add " IP4_DST "/32 dev veth_dst");
+	SYS(fail, "ip addr add " IP6_DST "/128 dev veth_dst nodad");
+	SYS(fail, "ip link set dev veth_dst up");

-	SYS("ip route add " IP4_SRC "/32 dev veth_dst scope global");
-	SYS("ip route add " IP4_NET "/16 dev veth_dst scope global");
-	SYS("ip route add " IP6_SRC "/128 dev veth_dst scope global");
+	SYS(fail, "ip route add " IP4_SRC "/32 dev veth_dst scope global");
+	SYS(fail, "ip route add " IP4_NET "/16 dev veth_dst scope global");
+	SYS(fail, "ip route add " IP6_SRC "/128 dev veth_dst scope global");

-	SYS("ip neigh add " IP4_SRC " dev veth_dst lladdr " MAC_DST_FWD);
-	SYS("ip neigh add " IP6_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+	SYS(fail, "ip neigh add " IP4_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+	SYS(fail, "ip neigh add " IP6_SRC " dev veth_dst lladdr " MAC_DST_FWD);

 	close_netns(nstoken);
@@ -375,7 +367,7 @@ done:

 static int test_ping(int family, const char *addr)
 {
-	SYS("ip netns exec " NS_SRC " %s " PING_ARGS " %s > /dev/null", ping_command(family), addr);
+	SYS(fail, "ip netns exec " NS_SRC " %s " PING_ARGS " %s > /dev/null", ping_command(family), addr);
 	return 0;
 fail:
 	return -1;
@@ -953,7 +945,7 @@ static int tun_open(char *name)
 	if (!ASSERT_OK(err, "ioctl TUNSETIFF"))
 		goto fail;

-	SYS("ip link set dev %s up", name);
+	SYS(fail, "ip link set dev %s up", name);

 	return fd;
 fail:
@@ -1076,23 +1068,23 @@ static void test_tc_redirect_peer_l3(struct netns_setup_result *setup_result)
 	XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_EGRESS, skel->progs.tc_chk, 0);

 	/* Setup route and neigh tables */
-	SYS("ip -netns " NS_SRC " addr add dev tun_src " IP4_TUN_SRC "/24");
-	SYS("ip -netns " NS_FWD " addr add dev tun_fwd " IP4_TUN_FWD "/24");
+	SYS(fail, "ip -netns " NS_SRC " addr add dev tun_src " IP4_TUN_SRC "/24");
+	SYS(fail, "ip -netns " NS_FWD " addr add dev tun_fwd " IP4_TUN_FWD "/24");

-	SYS("ip -netns " NS_SRC " addr add dev tun_src " IP6_TUN_SRC "/64 nodad");
-	SYS("ip -netns " NS_FWD " addr add dev tun_fwd " IP6_TUN_FWD "/64 nodad");
+	SYS(fail, "ip -netns " NS_SRC " addr add dev tun_src " IP6_TUN_SRC "/64 nodad");
+	SYS(fail, "ip -netns " NS_FWD " addr add dev tun_fwd " IP6_TUN_FWD "/64 nodad");

-	SYS("ip -netns " NS_SRC " route del " IP4_DST "/32 dev veth_src scope global");
-	SYS("ip -netns " NS_SRC " route add " IP4_DST "/32 via " IP4_TUN_FWD
+	SYS(fail, "ip -netns " NS_SRC " route del " IP4_DST "/32 dev veth_src scope global");
+	SYS(fail, "ip -netns " NS_SRC " route add " IP4_DST "/32 via " IP4_TUN_FWD
 	    " dev tun_src scope global");
-	SYS("ip -netns " NS_DST " route add " IP4_TUN_SRC "/32 dev veth_dst scope global");
-	SYS("ip -netns " NS_SRC " route del " IP6_DST "/128 dev veth_src scope global");
-	SYS("ip -netns " NS_SRC " route add " IP6_DST "/128 via " IP6_TUN_FWD
+	SYS(fail, "ip -netns " NS_DST " route add " IP4_TUN_SRC "/32 dev veth_dst scope global");
+	SYS(fail, "ip -netns " NS_SRC " route del " IP6_DST "/128 dev veth_src scope global");
+	SYS(fail, "ip -netns " NS_SRC " route add " IP6_DST "/128 via " IP6_TUN_FWD
 	    " dev tun_src scope global");
-	SYS("ip -netns " NS_DST " route add " IP6_TUN_SRC "/128 dev veth_dst scope global");
+	SYS(fail, "ip -netns " NS_DST " route add " IP6_TUN_SRC "/128 dev veth_dst scope global");

-	SYS("ip -netns " NS_DST " neigh add " IP4_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
-	SYS("ip -netns " NS_DST " neigh add " IP6_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+	SYS(fail, "ip -netns " NS_DST " neigh add " IP4_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);
+	SYS(fail, "ip -netns " NS_DST " neigh add " IP6_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD);

 	if (!ASSERT_OK(set_forwarding(false), "disable forwarding"))
 		goto fail;
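The converted call sites above pass the failure label as the first argument. The per-file helper macros these tests used to define locally (several such copies are deleted in the hunks below) were consolidated into the shared selftest harness; a minimal sketch of the consolidated definitions, assuming they keep the bodies of the removed per-file copies and the harness's ASSERT_OK():

	/* sketch: SYS() asserts the shell command succeeded and jumps to
	 * the caller-supplied label on failure; SYS_NOFAIL() runs the
	 * command and ignores its exit status.
	 */
	#define SYS(goto_label, fmt, ...)				\
		({							\
			char cmd[1024];					\
			snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
			if (!ASSERT_OK(system(cmd), cmd))		\
				goto goto_label;			\
		})

	#define SYS_NOFAIL(fmt, ...)					\
		({							\
			char cmd[1024];					\
			snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
			system(cmd);					\
		})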
@@ -91,30 +91,15 @@

 #define PING_ARGS "-i 0.01 -c 3 -w 10 -q"

-#define SYS(fmt, ...)						\
-	({							\
-		char cmd[1024];					\
-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
-		if (!ASSERT_OK(system(cmd), cmd))		\
-			goto fail;				\
-	})
-
-#define SYS_NOFAIL(fmt, ...)					\
-	({							\
-		char cmd[1024];					\
-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
-		system(cmd);					\
-	})
-
 static int config_device(void)
 {
-	SYS("ip netns add at_ns0");
-	SYS("ip link add veth0 address " MAC_VETH1 " type veth peer name veth1");
-	SYS("ip link set veth0 netns at_ns0");
-	SYS("ip addr add " IP4_ADDR1_VETH1 "/24 dev veth1");
-	SYS("ip link set dev veth1 up mtu 1500");
-	SYS("ip netns exec at_ns0 ip addr add " IP4_ADDR_VETH0 "/24 dev veth0");
-	SYS("ip netns exec at_ns0 ip link set dev veth0 up mtu 1500");
+	SYS(fail, "ip netns add at_ns0");
+	SYS(fail, "ip link add veth0 address " MAC_VETH1 " type veth peer name veth1");
+	SYS(fail, "ip link set veth0 netns at_ns0");
+	SYS(fail, "ip addr add " IP4_ADDR1_VETH1 "/24 dev veth1");
+	SYS(fail, "ip link set dev veth1 up mtu 1500");
+	SYS(fail, "ip netns exec at_ns0 ip addr add " IP4_ADDR_VETH0 "/24 dev veth0");
+	SYS(fail, "ip netns exec at_ns0 ip link set dev veth0 up mtu 1500");

 	return 0;
 fail:
@@ -132,23 +117,23 @@ static void cleanup(void)
 static int add_vxlan_tunnel(void)
 {
 	/* at_ns0 namespace */
-	SYS("ip netns exec at_ns0 ip link add dev %s type vxlan external gbp dstport 4789",
+	SYS(fail, "ip netns exec at_ns0 ip link add dev %s type vxlan external gbp dstport 4789",
 	    VXLAN_TUNL_DEV0);
-	SYS("ip netns exec at_ns0 ip link set dev %s address %s up",
+	SYS(fail, "ip netns exec at_ns0 ip link set dev %s address %s up",
 	    VXLAN_TUNL_DEV0, MAC_TUNL_DEV0);
-	SYS("ip netns exec at_ns0 ip addr add dev %s %s/24",
+	SYS(fail, "ip netns exec at_ns0 ip addr add dev %s %s/24",
 	    VXLAN_TUNL_DEV0, IP4_ADDR_TUNL_DEV0);
-	SYS("ip netns exec at_ns0 ip neigh add %s lladdr %s dev %s",
+	SYS(fail, "ip netns exec at_ns0 ip neigh add %s lladdr %s dev %s",
 	    IP4_ADDR_TUNL_DEV1, MAC_TUNL_DEV1, VXLAN_TUNL_DEV0);
-	SYS("ip netns exec at_ns0 ip neigh add %s lladdr %s dev veth0",
+	SYS(fail, "ip netns exec at_ns0 ip neigh add %s lladdr %s dev veth0",
 	    IP4_ADDR2_VETH1, MAC_VETH1);

 	/* root namespace */
-	SYS("ip link add dev %s type vxlan external gbp dstport 4789",
+	SYS(fail, "ip link add dev %s type vxlan external gbp dstport 4789",
 	    VXLAN_TUNL_DEV1);
-	SYS("ip link set dev %s address %s up", VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);
-	SYS("ip addr add dev %s %s/24", VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
-	SYS("ip neigh add %s lladdr %s dev %s",
+	SYS(fail, "ip link set dev %s address %s up", VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);
+	SYS(fail, "ip addr add dev %s %s/24", VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
+	SYS(fail, "ip neigh add %s lladdr %s dev %s",
 	    IP4_ADDR_TUNL_DEV0, MAC_TUNL_DEV0, VXLAN_TUNL_DEV1);

 	return 0;
@@ -165,26 +150,26 @@ static void delete_vxlan_tunnel(void)

 static int add_ip6vxlan_tunnel(void)
 {
-	SYS("ip netns exec at_ns0 ip -6 addr add %s/96 dev veth0",
+	SYS(fail, "ip netns exec at_ns0 ip -6 addr add %s/96 dev veth0",
 	    IP6_ADDR_VETH0);
-	SYS("ip netns exec at_ns0 ip link set dev veth0 up");
-	SYS("ip -6 addr add %s/96 dev veth1", IP6_ADDR1_VETH1);
-	SYS("ip -6 addr add %s/96 dev veth1", IP6_ADDR2_VETH1);
-	SYS("ip link set dev veth1 up");
+	SYS(fail, "ip netns exec at_ns0 ip link set dev veth0 up");
+	SYS(fail, "ip -6 addr add %s/96 dev veth1", IP6_ADDR1_VETH1);
+	SYS(fail, "ip -6 addr add %s/96 dev veth1", IP6_ADDR2_VETH1);
+	SYS(fail, "ip link set dev veth1 up");

 	/* at_ns0 namespace */
-	SYS("ip netns exec at_ns0 ip link add dev %s type vxlan external dstport 4789",
+	SYS(fail, "ip netns exec at_ns0 ip link add dev %s type vxlan external dstport 4789",
 	    IP6VXLAN_TUNL_DEV0);
-	SYS("ip netns exec at_ns0 ip addr add dev %s %s/24",
+	SYS(fail, "ip netns exec at_ns0 ip addr add dev %s %s/24",
 	    IP6VXLAN_TUNL_DEV0, IP4_ADDR_TUNL_DEV0);
-	SYS("ip netns exec at_ns0 ip link set dev %s address %s up",
+	SYS(fail, "ip netns exec at_ns0 ip link set dev %s address %s up",
 	    IP6VXLAN_TUNL_DEV0, MAC_TUNL_DEV0);

 	/* root namespace */
-	SYS("ip link add dev %s type vxlan external dstport 4789",
+	SYS(fail, "ip link add dev %s type vxlan external dstport 4789",
 	    IP6VXLAN_TUNL_DEV1);
-	SYS("ip addr add dev %s %s/24", IP6VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
-	SYS("ip link set dev %s address %s up",
+	SYS(fail, "ip addr add dev %s %s/24", IP6VXLAN_TUNL_DEV1, IP4_ADDR_TUNL_DEV1);
+	SYS(fail, "ip link set dev %s address %s up",
 	    IP6VXLAN_TUNL_DEV1, MAC_TUNL_DEV1);

 	return 0;
@@ -205,7 +190,7 @@ static void delete_ip6vxlan_tunnel(void)

 static int test_ping(int family, const char *addr)
 {
-	SYS("%s %s %s > /dev/null", ping_command(family), PING_ARGS, addr);
+	SYS(fail, "%s %s %s > /dev/null", ping_command(family), PING_ARGS, addr);
 	return 0;
 fail:
 	return -1;
@@ -29,6 +29,9 @@ static int timer(struct timer *timer_skel)
 	/* check that timer_cb2() was executed twice */
 	ASSERT_EQ(timer_skel->bss->bss_data, 10, "bss_data");

+	/* check that timer_cb3() was executed twice */
+	ASSERT_EQ(timer_skel->bss->abs_data, 12, "abs_data");
+
 	/* check that there were no errors in timer execution */
 	ASSERT_EQ(timer_skel->bss->err, 0, "err");
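The new abs_data assertions exercise the BPF_F_TIMER_ABS flag added in this series, which makes bpf_timer_start() treat its second argument as an absolute time on the clock chosen at init rather than a relative delay. A minimal sketch of the BPF-side pattern; the map, callback, and attach point are illustrative, not the selftest's actual wiring:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	#define CLOCK_BOOTTIME 7	/* from uapi/linux/time.h */

	struct elem {
		struct bpf_timer t;
	};

	struct {
		__uint(type, BPF_MAP_TYPE_ARRAY);
		__uint(max_entries, 1);
		__type(key, int);
		__type(value, struct elem);
	} abs_timer_map SEC(".maps");

	static int timer_cb_abs(void *map, int *key, struct bpf_timer *timer)
	{
		return 0;
	}

	SEC("tc")
	int start_abs_timer(struct __sk_buff *skb)
	{
		struct elem *val;
		int key = 0;

		val = bpf_map_lookup_elem(&abs_timer_map, &key);
		if (!val)
			return 0;
		bpf_timer_init(&val->t, &abs_timer_map, CLOCK_BOOTTIME);
		bpf_timer_set_callback(&val->t, timer_cb_abs);
		/* expiry is an absolute value on the init-time clock,
		 * not an offset from now */
		bpf_timer_start(&val->t, bpf_ktime_get_boot_ns() + 500000,
				BPF_F_TIMER_ABS);
		return 0;
	}

	char _license[] SEC("license") = "GPL";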
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+#include "uninit_stack.skel.h"
+
+void test_uninit_stack(void)
+{
+	RUN_TESTS(uninit_stack);
+}
@@ -590,7 +590,7 @@ static void *kick_kernel_cb(void *arg)
 	/* Kick the kernel, causing it to drain the ring buffer and then wake
 	 * up the test thread waiting on epoll.
 	 */
-	syscall(__NR_getrlimit);
+	syscall(__NR_prlimit64);

 	return NULL;
 }
@@ -4,11 +4,10 @@
 #define IFINDEX_LO 1
 #define XDP_FLAGS_REPLACE (1U << 4)

-void serial_test_xdp_attach(void)
+static void test_xdp_attach(const char *file)
 {
 	__u32 duration = 0, id1, id2, id0 = 0, len;
 	struct bpf_object *obj1, *obj2, *obj3;
-	const char *file = "./test_xdp.bpf.o";
 	struct bpf_prog_info info = {};
 	int err, fd1, fd2, fd3;
 	LIBBPF_OPTS(bpf_xdp_attach_opts, opts);
@@ -85,3 +84,11 @@ out_2:
 out_1:
 	bpf_object__close(obj1);
 }
+
+void serial_test_xdp_attach(void)
+{
+	if (test__start_subtest("xdp_attach"))
+		test_xdp_attach("./test_xdp.bpf.o");
+	if (test__start_subtest("xdp_attach_dynptr"))
+		test_xdp_attach("./test_xdp_dynptr.bpf.o");
+}
@@ -141,41 +141,33 @@ static const char * const xmit_policy_names[] = {
 static int bonding_setup(struct skeletons *skeletons, int mode, int xmit_policy,
 			 int bond_both_attach)
 {
-#define SYS(fmt, ...)						\
-	({							\
-		char cmd[1024];					\
-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
-		if (!ASSERT_OK(system(cmd), cmd))		\
-			return -1;				\
-	})
+	SYS(fail, "ip netns add ns_dst");
+	SYS(fail, "ip link add veth1_1 type veth peer name veth2_1 netns ns_dst");
+	SYS(fail, "ip link add veth1_2 type veth peer name veth2_2 netns ns_dst");

-	SYS("ip netns add ns_dst");
-	SYS("ip link add veth1_1 type veth peer name veth2_1 netns ns_dst");
-	SYS("ip link add veth1_2 type veth peer name veth2_2 netns ns_dst");
-
-	SYS("ip link add bond1 type bond mode %s xmit_hash_policy %s",
+	SYS(fail, "ip link add bond1 type bond mode %s xmit_hash_policy %s",
 	    mode_names[mode], xmit_policy_names[xmit_policy]);
-	SYS("ip link set bond1 up address " BOND1_MAC_STR " addrgenmode none");
-	SYS("ip -netns ns_dst link add bond2 type bond mode %s xmit_hash_policy %s",
+	SYS(fail, "ip link set bond1 up address " BOND1_MAC_STR " addrgenmode none");
+	SYS(fail, "ip -netns ns_dst link add bond2 type bond mode %s xmit_hash_policy %s",
 	    mode_names[mode], xmit_policy_names[xmit_policy]);
-	SYS("ip -netns ns_dst link set bond2 up address " BOND2_MAC_STR " addrgenmode none");
+	SYS(fail, "ip -netns ns_dst link set bond2 up address " BOND2_MAC_STR " addrgenmode none");

-	SYS("ip link set veth1_1 master bond1");
+	SYS(fail, "ip link set veth1_1 master bond1");
 	if (bond_both_attach == BOND_BOTH_AND_ATTACH) {
-		SYS("ip link set veth1_2 master bond1");
+		SYS(fail, "ip link set veth1_2 master bond1");
 	} else {
-		SYS("ip link set veth1_2 up addrgenmode none");
+		SYS(fail, "ip link set veth1_2 up addrgenmode none");

 		if (xdp_attach(skeletons, skeletons->xdp_dummy->progs.xdp_dummy_prog, "veth1_2"))
 			return -1;
 	}

-	SYS("ip -netns ns_dst link set veth2_1 master bond2");
+	SYS(fail, "ip -netns ns_dst link set veth2_1 master bond2");

 	if (bond_both_attach == BOND_BOTH_AND_ATTACH)
-		SYS("ip -netns ns_dst link set veth2_2 master bond2");
+		SYS(fail, "ip -netns ns_dst link set veth2_2 master bond2");
 	else
-		SYS("ip -netns ns_dst link set veth2_2 up addrgenmode none");
+		SYS(fail, "ip -netns ns_dst link set veth2_2 up addrgenmode none");

 	/* Load a dummy program on sending side as with veth peer needs to have a
 	 * XDP program loaded as well.
@@ -194,8 +186,8 @@ static int bonding_setup(struct skeletons *skeletons, int mode, int xmit_policy,
 	}

 	return 0;
-
-#undef SYS
+fail:
+	return -1;
 }

 static void bonding_cleanup(struct skeletons *skeletons)
@@ -12,14 +12,6 @@
 #include <uapi/linux/netdev.h>
 #include "test_xdp_do_redirect.skel.h"

-#define SYS(fmt, ...)						\
-	({							\
-		char cmd[1024];					\
-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
-		if (!ASSERT_OK(system(cmd), cmd))		\
-			goto out;				\
-	})
-
 struct udp_packet {
 	struct ethhdr eth;
 	struct ipv6hdr iph;
@@ -126,19 +118,19 @@ void test_xdp_do_redirect(void)
 	 * iface and NUM_PKTS-2 in the TC hook. We match the packets on the UDP
 	 * payload.
 	 */
-	SYS("ip netns add testns");
+	SYS(out, "ip netns add testns");
 	nstoken = open_netns("testns");
 	if (!ASSERT_OK_PTR(nstoken, "setns"))
 		goto out;

-	SYS("ip link add veth_src type veth peer name veth_dst");
-	SYS("ip link set dev veth_src address 00:11:22:33:44:55");
-	SYS("ip link set dev veth_dst address 66:77:88:99:aa:bb");
-	SYS("ip link set dev veth_src up");
-	SYS("ip link set dev veth_dst up");
-	SYS("ip addr add dev veth_src fc00::1/64");
-	SYS("ip addr add dev veth_dst fc00::2/64");
-	SYS("ip neigh add fc00::2 dev veth_src lladdr 66:77:88:99:aa:bb");
+	SYS(out, "ip link add veth_src type veth peer name veth_dst");
+	SYS(out, "ip link set dev veth_src address 00:11:22:33:44:55");
+	SYS(out, "ip link set dev veth_dst address 66:77:88:99:aa:bb");
+	SYS(out, "ip link set dev veth_src up");
+	SYS(out, "ip link set dev veth_dst up");
+	SYS(out, "ip addr add dev veth_src fc00::1/64");
+	SYS(out, "ip addr add dev veth_dst fc00::2/64");
+	SYS(out, "ip neigh add fc00::2 dev veth_src lladdr 66:77:88:99:aa:bb");

 	/* We enable forwarding in the test namespace because that will cause
 	 * the packets that go through the kernel stack (with XDP_PASS) to be
@@ -151,7 +143,7 @@ void test_xdp_do_redirect(void)
 	 * code didn't have this, so we keep the test behaviour to make sure the
 	 * bug doesn't resurface.
 	 */
-	SYS("sysctl -qw net.ipv6.conf.all.forwarding=1");
+	SYS(out, "sysctl -qw net.ipv6.conf.all.forwarding=1");

 	ifindex_src = if_nametoindex("veth_src");
 	ifindex_dst = if_nametoindex("veth_dst");
@@ -225,6 +217,6 @@ out_tc:
 out:
 	if (nstoken)
 		close_netns(nstoken);
-	system("ip netns del testns");
+	SYS_NOFAIL("ip netns del testns");
 	test_xdp_do_redirect__destroy(skel);
 }
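Taken together, the conversions in this file show the idiom the SYS() rework enables: fallible setup commands jump to the cleanup label, while teardown uses SYS_NOFAIL() so a partially built environment does not turn cleanup itself into a test failure. A condensed sketch with illustrative names:

	/* sketch: structure of a converted test, names are illustrative */
	static void test_something(void)
	{
		struct nstoken *tok = NULL;

		SYS(out, "ip netns add testns");
		tok = open_netns("testns");
		if (!ASSERT_OK_PTR(tok, "setns"))
			goto out;
		/* ... actual test body ... */
	out:
		if (tok)
			close_netns(tok);
		SYS_NOFAIL("ip netns del testns");
	}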
@@ -34,11 +34,6 @@
 #define PREFIX_LEN "8"
 #define FAMILY AF_INET

-#define SYS(cmd) ({ \
-	if (!ASSERT_OK(system(cmd), (cmd))) \
-		goto out; \
-})
-
 struct xsk {
 	void *umem_area;
 	struct xsk_umem *umem;
@@ -298,16 +293,16 @@ void test_xdp_metadata(void)

 	/* Setup new networking namespace, with a veth pair. */

-	SYS("ip netns add xdp_metadata");
+	SYS(out, "ip netns add xdp_metadata");
 	tok = open_netns("xdp_metadata");
-	SYS("ip link add numtxqueues 1 numrxqueues 1 " TX_NAME
+	SYS(out, "ip link add numtxqueues 1 numrxqueues 1 " TX_NAME
 	    " type veth peer " RX_NAME " numtxqueues 1 numrxqueues 1");
-	SYS("ip link set dev " TX_NAME " address 00:00:00:00:00:01");
-	SYS("ip link set dev " RX_NAME " address 00:00:00:00:00:02");
-	SYS("ip link set dev " TX_NAME " up");
-	SYS("ip link set dev " RX_NAME " up");
-	SYS("ip addr add " TX_ADDR "/" PREFIX_LEN " dev " TX_NAME);
-	SYS("ip addr add " RX_ADDR "/" PREFIX_LEN " dev " RX_NAME);
+	SYS(out, "ip link set dev " TX_NAME " address 00:00:00:00:00:01");
+	SYS(out, "ip link set dev " RX_NAME " address 00:00:00:00:00:02");
+	SYS(out, "ip link set dev " TX_NAME " up");
+	SYS(out, "ip link set dev " RX_NAME " up");
+	SYS(out, "ip addr add " TX_ADDR "/" PREFIX_LEN " dev " TX_NAME);
+	SYS(out, "ip addr add " RX_ADDR "/" PREFIX_LEN " dev " RX_NAME);

 	rx_ifindex = if_nametoindex(RX_NAME);
 	tx_ifindex = if_nametoindex(TX_NAME);
@@ -405,5 +400,5 @@ out:
 	xdp_metadata__destroy(bpf_obj);
 	if (tok)
 		close_netns(tok);
-	system("ip netns del xdp_metadata");
+	SYS_NOFAIL("ip netns del xdp_metadata");
 }
@@ -8,11 +8,6 @@

 #define CMD_OUT_BUF_SIZE 1023

-#define SYS(cmd) ({ \
-	if (!ASSERT_OK(system(cmd), (cmd))) \
-		goto out; \
-})
-
 #define SYS_OUT(cmd, ...) ({ \
 	char buf[1024]; \
 	snprintf(buf, sizeof(buf), (cmd), ##__VA_ARGS__); \
@@ -69,37 +64,37 @@ static void test_synproxy(bool xdp)
 	char buf[CMD_OUT_BUF_SIZE];
 	size_t size;

-	SYS("ip netns add synproxy");
+	SYS(out, "ip netns add synproxy");

-	SYS("ip link add tmp0 type veth peer name tmp1");
-	SYS("ip link set tmp1 netns synproxy");
-	SYS("ip link set tmp0 up");
-	SYS("ip addr replace 198.18.0.1/24 dev tmp0");
+	SYS(out, "ip link add tmp0 type veth peer name tmp1");
+	SYS(out, "ip link set tmp1 netns synproxy");
+	SYS(out, "ip link set tmp0 up");
+	SYS(out, "ip addr replace 198.18.0.1/24 dev tmp0");

 	/* When checksum offload is enabled, the XDP program sees wrong
 	 * checksums and drops packets.
 	 */
-	SYS("ethtool -K tmp0 tx off");
+	SYS(out, "ethtool -K tmp0 tx off");
 	if (xdp)
 		/* Workaround required for veth. */
-		SYS("ip link set tmp0 xdp object xdp_dummy.bpf.o section xdp 2> /dev/null");
+		SYS(out, "ip link set tmp0 xdp object xdp_dummy.bpf.o section xdp 2> /dev/null");

 	ns = open_netns("synproxy");
 	if (!ASSERT_OK_PTR(ns, "setns"))
 		goto out;

-	SYS("ip link set lo up");
-	SYS("ip link set tmp1 up");
-	SYS("ip addr replace 198.18.0.2/24 dev tmp1");
-	SYS("sysctl -w net.ipv4.tcp_syncookies=2");
-	SYS("sysctl -w net.ipv4.tcp_timestamps=1");
-	SYS("sysctl -w net.netfilter.nf_conntrack_tcp_loose=0");
-	SYS("iptables-legacy -t raw -I PREROUTING \
+	SYS(out, "ip link set lo up");
+	SYS(out, "ip link set tmp1 up");
+	SYS(out, "ip addr replace 198.18.0.2/24 dev tmp1");
+	SYS(out, "sysctl -w net.ipv4.tcp_syncookies=2");
+	SYS(out, "sysctl -w net.ipv4.tcp_timestamps=1");
+	SYS(out, "sysctl -w net.netfilter.nf_conntrack_tcp_loose=0");
+	SYS(out, "iptables-legacy -t raw -I PREROUTING \
 	    -i tmp1 -p tcp -m tcp --syn --dport 8080 -j CT --notrack");
-	SYS("iptables-legacy -t filter -A INPUT \
+	SYS(out, "iptables-legacy -t filter -A INPUT \
 	    -i tmp1 -p tcp -m tcp --dport 8080 -m state --state INVALID,UNTRACKED \
 	    -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460");
-	SYS("iptables-legacy -t filter -A INPUT \
+	SYS(out, "iptables-legacy -t filter -A INPUT \
 	    -i tmp1 -m state --state INVALID -j DROP");

 	ctrl_file = SYS_OUT("./xdp_synproxy --iface tmp1 --ports 8080 \
@@ -170,8 +165,8 @@ out:
 	if (ns)
 		close_netns(ns);

-	system("ip link del tmp0");
-	system("ip netns del synproxy");
+	SYS_NOFAIL("ip link del tmp0");
+	SYS_NOFAIL("ip netns del synproxy");
 }

 void test_xdp_synproxy(void)
@@ -69,21 +69,6 @@
 	"proto esp aead 'rfc4106(gcm(aes))' " \
 	"0xe4d8f4b4da1df18a3510b3781496daa82488b713 128 mode tunnel "

-#define SYS(fmt, ...)						\
-	({							\
-		char cmd[1024];					\
-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
-		if (!ASSERT_OK(system(cmd), cmd))		\
-			goto fail;				\
-	})
-
-#define SYS_NOFAIL(fmt, ...)					\
-	({							\
-		char cmd[1024];					\
-		snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__);	\
-		system(cmd);					\
-	})
-
 static int attach_tc_prog(struct bpf_tc_hook *hook, int igr_fd, int egr_fd)
 {
 	LIBBPF_OPTS(bpf_tc_opts, opts1, .handle = 1, .priority = 1,
@@ -126,23 +111,23 @@ static void cleanup(void)

 static int config_underlay(void)
 {
-	SYS("ip netns add " NS0);
-	SYS("ip netns add " NS1);
-	SYS("ip netns add " NS2);
+	SYS(fail, "ip netns add " NS0);
+	SYS(fail, "ip netns add " NS1);
+	SYS(fail, "ip netns add " NS2);

 	/* NS0 <-> NS1 [veth01 <-> veth10] */
-	SYS("ip link add veth01 netns " NS0 " type veth peer name veth10 netns " NS1);
-	SYS("ip -net " NS0 " addr add " IP4_ADDR_VETH01 "/24 dev veth01");
-	SYS("ip -net " NS0 " link set dev veth01 up");
-	SYS("ip -net " NS1 " addr add " IP4_ADDR_VETH10 "/24 dev veth10");
-	SYS("ip -net " NS1 " link set dev veth10 up");
+	SYS(fail, "ip link add veth01 netns " NS0 " type veth peer name veth10 netns " NS1);
+	SYS(fail, "ip -net " NS0 " addr add " IP4_ADDR_VETH01 "/24 dev veth01");
+	SYS(fail, "ip -net " NS0 " link set dev veth01 up");
+	SYS(fail, "ip -net " NS1 " addr add " IP4_ADDR_VETH10 "/24 dev veth10");
+	SYS(fail, "ip -net " NS1 " link set dev veth10 up");

 	/* NS0 <-> NS2 [veth02 <-> veth20] */
-	SYS("ip link add veth02 netns " NS0 " type veth peer name veth20 netns " NS2);
-	SYS("ip -net " NS0 " addr add " IP4_ADDR_VETH02 "/24 dev veth02");
-	SYS("ip -net " NS0 " link set dev veth02 up");
-	SYS("ip -net " NS2 " addr add " IP4_ADDR_VETH20 "/24 dev veth20");
-	SYS("ip -net " NS2 " link set dev veth20 up");
+	SYS(fail, "ip link add veth02 netns " NS0 " type veth peer name veth20 netns " NS2);
+	SYS(fail, "ip -net " NS0 " addr add " IP4_ADDR_VETH02 "/24 dev veth02");
+	SYS(fail, "ip -net " NS0 " link set dev veth02 up");
+	SYS(fail, "ip -net " NS2 " addr add " IP4_ADDR_VETH20 "/24 dev veth20");
+	SYS(fail, "ip -net " NS2 " link set dev veth20 up");

 	return 0;
 fail:
@@ -153,20 +138,20 @@ static int setup_xfrm_tunnel_ns(const char *ns, const char *ipv4_local,
 				const char *ipv4_remote, int if_id)
 {
 	/* State: local -> remote */
-	SYS("ip -net %s xfrm state add src %s dst %s spi 1 "
+	SYS(fail, "ip -net %s xfrm state add src %s dst %s spi 1 "
 	    ESP_DUMMY_PARAMS "if_id %d", ns, ipv4_local, ipv4_remote, if_id);

 	/* State: local <- remote */
-	SYS("ip -net %s xfrm state add src %s dst %s spi 1 "
+	SYS(fail, "ip -net %s xfrm state add src %s dst %s spi 1 "
 	    ESP_DUMMY_PARAMS "if_id %d", ns, ipv4_remote, ipv4_local, if_id);

 	/* Policy: local -> remote */
-	SYS("ip -net %s xfrm policy add dir out src 0.0.0.0/0 dst 0.0.0.0/0 "
+	SYS(fail, "ip -net %s xfrm policy add dir out src 0.0.0.0/0 dst 0.0.0.0/0 "
 	    "if_id %d tmpl src %s dst %s proto esp mode tunnel if_id %d", ns,
 	    if_id, ipv4_local, ipv4_remote, if_id);

 	/* Policy: local <- remote */
-	SYS("ip -net %s xfrm policy add dir in src 0.0.0.0/0 dst 0.0.0.0/0 "
+	SYS(fail, "ip -net %s xfrm policy add dir in src 0.0.0.0/0 dst 0.0.0.0/0 "
 	    "if_id %d tmpl src %s dst %s proto esp mode tunnel if_id %d", ns,
 	    if_id, ipv4_remote, ipv4_local, if_id);

@@ -274,16 +259,16 @@ static int config_overlay(void)
 	if (!ASSERT_OK(setup_xfrmi_external_dev(NS0), "xfrmi"))
 		goto fail;

-	SYS("ip -net " NS0 " addr add 192.168.1.100/24 dev ipsec0");
-	SYS("ip -net " NS0 " link set dev ipsec0 up");
+	SYS(fail, "ip -net " NS0 " addr add 192.168.1.100/24 dev ipsec0");
+	SYS(fail, "ip -net " NS0 " link set dev ipsec0 up");

-	SYS("ip -net " NS1 " link add ipsec0 type xfrm if_id %d", IF_ID_1);
-	SYS("ip -net " NS1 " addr add 192.168.1.200/24 dev ipsec0");
-	SYS("ip -net " NS1 " link set dev ipsec0 up");
+	SYS(fail, "ip -net " NS1 " link add ipsec0 type xfrm if_id %d", IF_ID_1);
+	SYS(fail, "ip -net " NS1 " addr add 192.168.1.200/24 dev ipsec0");
+	SYS(fail, "ip -net " NS1 " link set dev ipsec0 up");

-	SYS("ip -net " NS2 " link add ipsec0 type xfrm if_id %d", IF_ID_2);
-	SYS("ip -net " NS2 " addr add 192.168.1.200/24 dev ipsec0");
-	SYS("ip -net " NS2 " link set dev ipsec0 up");
+	SYS(fail, "ip -net " NS2 " link add ipsec0 type xfrm if_id %d", IF_ID_2);
+	SYS(fail, "ip -net " NS2 " addr add 192.168.1.200/24 dev ipsec0");
+	SYS(fail, "ip -net " NS2 " link set dev ipsec0 up");

 	return 0;
 fail:
@@ -294,7 +279,7 @@ static int test_xfrm_ping(struct xfrm_info *skel, u32 if_id)
 {
 	skel->bss->req_if_id = if_id;

-	SYS("ping -i 0.01 -c 3 -w 10 -q 192.168.1.200 > /dev/null");
+	SYS(fail, "ping -i 0.01 -c 3 -w 10 -q 192.168.1.200 > /dev/null");

 	if (!ASSERT_EQ(skel->bss->resp_if_id, if_id, "if_id"))
 		goto fail;
@@ -337,7 +337,7 @@ PROG(IPV6)(struct __sk_buff *skb)
 	keys->ip_proto = ip6h->nexthdr;
 	keys->flow_label = ip6_flowlabel(ip6h);

-	if (keys->flags & BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL)
+	if (keys->flow_label && keys->flags & BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL)
 		return export_flow_keys(keys, BPF_OK);

 	return parse_ipv6_proto(skb, ip6h->nexthdr);
@@ -2,10 +2,33 @@
 #ifndef __BPF_MISC_H__
 #define __BPF_MISC_H__

+/* This set of attributes controls behavior of the
+ * test_loader.c:test_loader__run_subtests().
+ *
+ * __msg		Message expected to be found in the verifier log.
+ *			Multiple __msg attributes could be specified.
+ *
+ * __success		Expect program load success in privileged mode.
+ *
+ * __failure		Expect program load failure in privileged mode.
+ *
+ * __log_level		Log level to use for the program, numeric value expected.
+ *
+ * __flag		Adds one flag use for the program, the following values are valid:
+ *			- BPF_F_STRICT_ALIGNMENT;
+ *			- BPF_F_TEST_RND_HI32;
+ *			- BPF_F_TEST_STATE_FREQ;
+ *			- BPF_F_SLEEPABLE;
+ *			- BPF_F_XDP_HAS_FRAGS;
+ *			- A numeric value.
+ *			Multiple __flag attributes could be specified, the final flags
+ *			value is derived by applying binary "or" to all specified values.
+ */
 #define __msg(msg)		__attribute__((btf_decl_tag("comment:test_expect_msg=" msg)))
 #define __failure		__attribute__((btf_decl_tag("comment:test_expect_failure")))
 #define __success		__attribute__((btf_decl_tag("comment:test_expect_success")))
 #define __log_level(lvl)	__attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
 #define __flag(flag)		__attribute__((btf_decl_tag("comment:test_prog_flags="#flag)))

 /* Convenience macro for use with 'asm volatile' blocks */
 #define __naked __attribute__((naked))
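A minimal sketch of how a test program consumes these tags; the program body and the expected verifier string are illustrative and would need to match a real rejection message:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>
	#include "bpf_misc.h"

	SEC("?tc")
	__failure __msg("invalid mem access")
	__flag(BPF_F_TEST_STATE_FREQ)
	int reject_bad_read(void *ctx)
	{
		/* deliberately dereference a scalar so the load is rejected */
		return *(volatile int *)1;
	}

	char _license[] SEC("license") = "GPL";

The loader-side counterpart is then a single RUN_TESTS(skeleton_name) call in a prog_tests file, as the uninit_stack test added earlier in this series demonstrates.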
@@ -4,7 +4,7 @@
 #include <bpf/bpf_helpers.h>

 struct map_value {
-	struct prog_test_ref_kfunc __kptr_ref *ptr;
+	struct prog_test_ref_kfunc __kptr *ptr;
 };

 struct {
@@ -10,7 +10,7 @@
 #include <bpf/bpf_tracing.h>

 struct __cgrps_kfunc_map_value {
-	struct cgroup __kptr_ref * cgrp;
+	struct cgroup __kptr * cgrp;
 };

 struct hash_map {
@@ -24,6 +24,7 @@ struct cgroup *bpf_cgroup_acquire(struct cgroup *p) __ksym;
 struct cgroup *bpf_cgroup_kptr_get(struct cgroup **pp) __ksym;
 void bpf_cgroup_release(struct cgroup *p) __ksym;
 struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) __ksym;
+struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;

 static inline struct __cgrps_kfunc_map_value *cgrps_kfunc_map_value_lookup(struct cgroup *cgrp)
 {
@@ -205,7 +205,7 @@ int BPF_PROG(cgrp_kfunc_get_unreleased, struct cgroup *cgrp, const char *path)
 }

 SEC("tp_btf/cgroup_mkdir")
-__failure __msg("arg#0 is untrusted_ptr_or_null_ expected ptr_ or socket")
+__failure __msg("expects refcounted")
 int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path)
 {
 	struct __cgrps_kfunc_map_value *v;
@@ -61,7 +61,7 @@ int BPF_PROG(test_cgrp_acquire_leave_in_map, struct cgroup *cgrp, const char *pa
 SEC("tp_btf/cgroup_mkdir")
 int BPF_PROG(test_cgrp_xchg_release, struct cgroup *cgrp, const char *path)
 {
-	struct cgroup *kptr;
+	struct cgroup *kptr, *cg;
 	struct __cgrps_kfunc_map_value *v;
 	long status;

@@ -80,6 +80,16 @@ int BPF_PROG(test_cgrp_xchg_release, struct cgroup *cgrp, const char *path)
 		return 0;
 	}

+	kptr = v->cgrp;
+	if (!kptr) {
+		err = 4;
+		return 0;
+	}
+
+	cg = bpf_cgroup_ancestor(kptr, 1);
+	if (cg)	/* verifier only check */
+		bpf_cgroup_release(cg);
+
 	kptr = bpf_kptr_xchg(&v->cgrp, NULL);
 	if (!kptr) {
 		err = 3;
@@ -168,3 +178,45 @@ int BPF_PROG(test_cgrp_get_ancestors, struct cgroup *cgrp, const char *path)

 	return 0;
 }
+
+SEC("tp_btf/cgroup_mkdir")
+int BPF_PROG(test_cgrp_from_id, struct cgroup *cgrp, const char *path)
+{
+	struct cgroup *parent, *res;
+	u64 parent_cgid;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* @cgrp's ID is not visible yet, let's test with the parent */
+	parent = bpf_cgroup_ancestor(cgrp, cgrp->level - 1);
+	if (!parent) {
+		err = 1;
+		return 0;
+	}
+
+	parent_cgid = parent->kn->id;
+	bpf_cgroup_release(parent);
+
+	res = bpf_cgroup_from_id(parent_cgid);
+	if (!res) {
+		err = 2;
+		return 0;
+	}
+
+	bpf_cgroup_release(res);
+
+	if (res != parent) {
+		err = 3;
+		return 0;
+	}
+
+	res = bpf_cgroup_from_id((u64)-1);
+	if (res) {
+		bpf_cgroup_release(res);
+		err = 4;
+		return 0;
+	}
+
+	return 0;
+}
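Outside the test above, the new kfunc follows the usual acquire/release contract: bpf_cgroup_from_id() returns a referenced cgroup pointer or NULL, and a non-NULL result must be handed to bpf_cgroup_release(). A short sketch; the attach point is illustrative and treating kernfs id 1 as the root cgroup is an assumption:

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
	void bpf_cgroup_release(struct cgroup *p) __ksym;

	SEC("tp_btf/task_newtask")
	int BPF_PROG(lookup_root_cgroup, struct task_struct *task, u64 clone_flags)
	{
		struct cgroup *cgrp;

		cgrp = bpf_cgroup_from_id(1);	/* assumption: id 1 is the root cgroup */
		if (!cgrp)
			return 0;
		bpf_printk("root cgroup level=%d", cgrp->level);
		bpf_cgroup_release(cgrp);	/* reference must be dropped */
		return 0;
	}

	char _license[] SEC("license") = "GPL";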
@@ -49,7 +49,7 @@ int no_rcu_lock(void *ctx)
 	if (task->pid != target_pid)
 		return 0;

-	/* ptr_to_btf_id semantics. should work. */
+	/* task->cgroups is untrusted in sleepable prog outside of RCU CS */
 	cgrp = task->cgroups->dfl_cgrp;
 	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0,
 				   BPF_LOCAL_STORAGE_GET_F_CREATE);
@@ -71,7 +71,7 @@ int yes_rcu_lock(void *ctx)

 	bpf_rcu_read_lock();
 	cgrp = task->cgroups->dfl_cgrp;
-	/* cgrp is untrusted and cannot pass to bpf_cgrp_storage_get() helper. */
+	/* cgrp is trusted under RCU CS */
 	ptr = bpf_cgrp_storage_get(&map_a, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
 	if (ptr)
 		cgroup_id = cgrp->kn->id;
@@ -10,7 +10,7 @@
 int err;

 struct __cpumask_map_value {
-	struct bpf_cpumask __kptr_ref * cpumask;
+	struct bpf_cpumask __kptr * cpumask;
 };

 struct array_map {
@@ -44,7 +44,7 @@ int BPF_PROG(test_alloc_double_release, struct task_struct *task, u64 clone_flag
 }

 SEC("tp_btf/task_newtask")
-__failure __msg("bpf_cpumask_acquire args#0 expected pointer to STRUCT bpf_cpumask")
+__failure __msg("must be referenced")
 int BPF_PROG(test_acquire_wrong_cpumask, struct task_struct *task, u64 clone_flags)
 {
 	struct bpf_cpumask *cpumask;
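The dynptr failure tests that follow guard the intended-use pattern of the new skb/XDP dynptr slice kfuncs: bpf_dynptr_slice() returns a pointer either directly into the packet (when the bytes are linear) or into the caller-provided buffer, so the buffer must stay in scope, and the slice must be re-created after anything that may move packet data. A sketch of correct usage; the kfunc prototypes are written out here as an assumption in place of the selftests' bpf_kfuncs.h:

	#include <linux/bpf.h>
	#include <linux/if_ether.h>
	#include <linux/pkt_cls.h>
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_endian.h>

	/* assumed prototypes, normally provided by bpf_kfuncs.h */
	extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
				       struct bpf_dynptr *ptr) __ksym;
	extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, __u32 offset,
				      void *buffer, __u32 buffer_len) __ksym;

	SEC("tc")
	int parse_eth(struct __sk_buff *skb)
	{
		struct bpf_dynptr ptr;
		struct ethhdr *eth;
		char buf[sizeof(*eth)];

		if (bpf_dynptr_from_skb(skb, 0, &ptr))
			return TC_ACT_SHOT;
		/* points into the packet or into buf; read-only either way */
		eth = bpf_dynptr_slice(&ptr, 0, buf, sizeof(buf));
		if (!eth)
			return TC_ACT_SHOT;
		bpf_printk("h_proto=0x%x", bpf_ntohs(eth->h_proto));
		return TC_ACT_OK;
	}

	char _license[] SEC("license") = "GPL";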
@@ -5,7 +5,9 @@
 #include <string.h>
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
+#include <linux/if_ether.h>
 #include "bpf_misc.h"
+#include "bpf_kfuncs.h"

 char _license[] SEC("license") = "GPL";

@@ -244,6 +246,27 @@ done:
 	return 0;
 }

+/* A data slice can't be accessed out of bounds */
+SEC("?tc")
+__failure __msg("value is outside of the allowed memory range")
+int data_slice_out_of_bounds_skb(struct __sk_buff *skb)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	/* this should fail */
+	*(__u8*)(hdr + 1) = 1;
+
+	return SK_PASS;
+}
+
 SEC("?raw_tp")
 __failure __msg("value is outside of the allowed memory range")
 int data_slice_out_of_bounds_map_value(void *ctx)
@@ -399,7 +422,6 @@ int invalid_helper2(void *ctx)

 	/* this should fail */
 	bpf_dynptr_read(read_data, sizeof(read_data), (void *)&ptr + 8, 0, 0);
-
 	return 0;
 }

@@ -1044,6 +1066,193 @@ int dynptr_read_into_slot(void *ctx)
 	return 0;
 }

+/* bpf_dynptr_slice()s are read-only and cannot be written to */
+SEC("?tc")
+__failure __msg("R0 cannot write into rdonly_mem")
+int skb_invalid_slice_write(struct __sk_buff *skb)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	/* this should fail */
+	hdr->h_proto = 1;
+
+	return SK_PASS;
+}
+
+/* The read-only data slice is invalidated whenever a helper changes packet data */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int skb_invalid_data_slice1(struct __sk_buff *skb)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	val = hdr->h_proto;
+
+	if (bpf_skb_pull_data(skb, skb->len))
+		return SK_DROP;
+
+	/* this should fail */
+	val = hdr->h_proto;
+
+	return SK_PASS;
+}
+
+/* The read-write data slice is invalidated whenever a helper changes packet data */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int skb_invalid_data_slice2(struct __sk_buff *skb)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	hdr->h_proto = 123;
+
+	if (bpf_skb_pull_data(skb, skb->len))
+		return SK_DROP;
+
+	/* this should fail */
+	hdr->h_proto = 1;
+
+	return SK_PASS;
+}
+
+/* The read-only data slice is invalidated whenever bpf_dynptr_write() is called */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int skb_invalid_data_slice3(struct __sk_buff *skb)
+{
+	char write_data[64] = "hello there, world!!";
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	val = hdr->h_proto;
+
+	bpf_dynptr_write(&ptr, 0, write_data, sizeof(write_data), 0);
+
+	/* this should fail */
+	val = hdr->h_proto;
+
+	return SK_PASS;
+}
+
+/* The read-write data slice is invalidated whenever bpf_dynptr_write() is called */
+SEC("?tc")
+__failure __msg("invalid mem access 'scalar'")
+int skb_invalid_data_slice4(struct __sk_buff *skb)
+{
+	char write_data[64] = "hello there, world!!";
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+	hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	hdr->h_proto = 123;
+
+	bpf_dynptr_write(&ptr, 0, write_data, sizeof(write_data), 0);
+
+	/* this should fail */
+	hdr->h_proto = 1;
+
+	return SK_PASS;
+}
+
+/* The read-only data slice is invalidated whenever a helper changes packet data */
+SEC("?xdp")
+__failure __msg("invalid mem access 'scalar'")
+int xdp_invalid_data_slice1(struct xdp_md *xdp)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_xdp(xdp, 0, &ptr);
+	hdr = bpf_dynptr_slice(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	val = hdr->h_proto;
+
+	if (bpf_xdp_adjust_head(xdp, 0 - (int)sizeof(*hdr)))
+		return XDP_DROP;
+
+	/* this should fail */
+	val = hdr->h_proto;
+
+	return XDP_PASS;
+}
+
+/* The read-write data slice is invalidated whenever a helper changes packet data */
+SEC("?xdp")
+__failure __msg("invalid mem access 'scalar'")
+int xdp_invalid_data_slice2(struct xdp_md *xdp)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_xdp(xdp, 0, &ptr);
+	hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	hdr->h_proto = 9;
+
+	if (bpf_xdp_adjust_head(xdp, 0 - (int)sizeof(*hdr)))
+		return XDP_DROP;
+
+	/* this should fail */
+	hdr->h_proto = 1;
+
+	return XDP_PASS;
+}
+
+/* Only supported prog type can create skb-type dynptrs */
+SEC("?raw_tp")
+__failure __msg("calling kernel function bpf_dynptr_from_skb is not allowed")
+int skb_invalid_ctx(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	/* this should fail */
+	bpf_dynptr_from_skb(ctx, 0, &ptr);
+
+	return 0;
+}
+
 /* Reject writes to dynptr slot for uninit arg */
 SEC("?raw_tp")
 __failure __msg("potential write to dynptr at off=-16")
@@ -1061,6 +1270,61 @@ int uninit_write_into_slot(void *ctx)
 	return 0;
 }

+/* Only supported prog type can create xdp-type dynptrs */
+SEC("?raw_tp")
+__failure __msg("calling kernel function bpf_dynptr_from_xdp is not allowed")
+int xdp_invalid_ctx(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	/* this should fail */
+	bpf_dynptr_from_xdp(ctx, 0, &ptr);
+
+	return 0;
+}
+
+__u32 hdr_size = sizeof(struct ethhdr);
+/* Can't pass in variable-sized len to bpf_dynptr_slice */
+SEC("?tc")
+__failure __msg("unbounded memory access")
+int dynptr_slice_var_len1(struct __sk_buff *skb)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	/* this should fail */
+	hdr = bpf_dynptr_slice(&ptr, 0, buffer, hdr_size);
+	if (!hdr)
+		return SK_DROP;
+
+	return SK_PASS;
+}
+
+/* Can't pass in variable-sized len to bpf_dynptr_slice */
+SEC("?tc")
+__failure __msg("must be a known constant")
+int dynptr_slice_var_len2(struct __sk_buff *skb)
+{
+	char buffer[sizeof(struct ethhdr)] = {};
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	if (hdr_size <= sizeof(buffer)) {
+		/* this should fail */
+		hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, hdr_size);
+		if (!hdr)
+			return SK_DROP;
+		hdr->h_proto = 12;
+	}
+
+	return SK_PASS;
+}
+
 static int callback(__u32 index, void *data)
 {
 	*(__u32 *)data = 123;
@@ -1092,3 +1356,24 @@ int invalid_data_slices(void *ctx)

 	return 0;
 }
+
+/* Program types that don't allow writes to packet data should fail if
+ * bpf_dynptr_slice_rdwr is called
+ */
+SEC("cgroup_skb/ingress")
+__failure __msg("the prog does not allow writes to packet data")
+int invalid_slice_rdwr_rdonly(struct __sk_buff *skb)
+{
+	char buffer[sizeof(struct ethhdr)] = {};
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	/* this should fail since cgroup_skb doesn't allow
+	 * changing packet data
+	 */
+	hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+
+	return 0;
+}
@@ -5,6 +5,7 @@
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
 #include "bpf_misc.h"
+#include "bpf_kfuncs.h"
 #include "errno.h"

 char _license[] SEC("license") = "GPL";

@@ -30,7 +31,7 @@ struct {
 	__type(value, __u32);
 } array_map SEC(".maps");

-SEC("tp/syscalls/sys_enter_nanosleep")
+SEC("?tp/syscalls/sys_enter_nanosleep")
 int test_read_write(void *ctx)
 {
 	char write_data[64] = "hello there, world!!";
@@ -61,8 +62,8 @@ int test_read_write(void *ctx)
 	return 0;
 }

-SEC("tp/syscalls/sys_enter_nanosleep")
-int test_data_slice(void *ctx)
+SEC("?tp/syscalls/sys_enter_nanosleep")
+int test_dynptr_data(void *ctx)
 {
 	__u32 key = 0, val = 235, *map_val;
 	struct bpf_dynptr ptr;
@@ -131,7 +132,7 @@ static int ringbuf_callback(__u32 index, void *data)
 	return 0;
 }

-SEC("tp/syscalls/sys_enter_nanosleep")
+SEC("?tp/syscalls/sys_enter_nanosleep")
 int test_ringbuf(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -163,3 +164,49 @@ done:
 	bpf_ringbuf_discard_dynptr(&ptr, 0);
 	return 0;
 }
+
+SEC("?cgroup_skb/egress")
+int test_skb_readonly(struct __sk_buff *skb)
+{
+	__u8 write_data[2] = {1, 2};
+	struct bpf_dynptr ptr;
+	__u64 *data;
+	int ret;
+
+	if (bpf_dynptr_from_skb(skb, 0, &ptr)) {
+		err = 1;
+		return 1;
+	}
+
+	/* since cgroup skbs are read only, writes should fail */
+	ret = bpf_dynptr_write(&ptr, 0, write_data, sizeof(write_data), 0);
+	if (ret != -EINVAL) {
+		err = 2;
+		return 1;
+	}
+
+	return 1;
+}
+
+SEC("?cgroup_skb/egress")
+int test_dynptr_skb_data(struct __sk_buff *skb)
+{
+	__u8 write_data[2] = {1, 2};
+	struct bpf_dynptr ptr;
+	__u64 *data;
+	int ret;
+
+	if (bpf_dynptr_from_skb(skb, 0, &ptr)) {
+		err = 1;
+		return 1;
+	}
+
+	/* This should return NULL. Must use bpf_dynptr_slice API */
+	data = bpf_dynptr_data(&ptr, 0, 1);
+	if (data) {
+		err = 2;
+		return 1;
+	}
+
+	return 1;
+}
@@ -13,7 +13,7 @@ static long write_vma(struct task_struct *task, struct vm_area_struct *vma,
 		      struct callback_ctx *data)
 {
 	/* writing to vma, which is illegal */
-	vma->vm_flags |= 0x55;
+	vma->vm_start = 0xffffffffff600000;

 	return 0;
 }
@@ -4,7 +4,7 @@
 #include <bpf/bpf_tracing.h>
 #include <bpf/bpf_helpers.h>

-static struct prog_test_ref_kfunc __kptr_ref *v;
+static struct prog_test_ref_kfunc __kptr *v;
 long total_sum = -1;

 extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
@@ -4,7 +4,7 @@
 #include <bpf/bpf_helpers.h>

 struct map_value {
-	struct task_struct __kptr *ptr;
+	struct task_struct __kptr_untrusted *ptr;
 };

 struct {
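The rename pattern visible in these hunks is mechanical: what used to be spelled __kptr is now __kptr_untrusted, and what used to be __kptr_ref is now plain __kptr. A sketch of a map value using both, with illustrative field names:

	/* sketch: field names are illustrative */
	struct map_value {
		/* unreferenced pointer; may dangle, loads are untrusted */
		struct prog_test_ref_kfunc __kptr_untrusted *legacy_ptr;
		/* referenced pointer; moved in and out with bpf_kptr_xchg() */
		struct prog_test_ref_kfunc __kptr *owned_ptr;
	};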
@ -4,8 +4,8 @@
|
|||
#include <bpf/bpf_helpers.h>
|
||||
|
||||
struct map_value {
|
||||
struct prog_test_ref_kfunc __kptr *unref_ptr;
|
||||
struct prog_test_ref_kfunc __kptr_ref *ref_ptr;
|
||||
struct prog_test_ref_kfunc __kptr_untrusted *unref_ptr;
|
||||
struct prog_test_ref_kfunc __kptr *ref_ptr;
|
||||
};
|
||||
|
||||
struct array_map {
|
||||
|
@ -15,6 +15,13 @@ struct array_map {
|
|||
__uint(max_entries, 1);
|
||||
} array_map SEC(".maps");
|
||||
|
||||
struct pcpu_array_map {
|
||||
__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
|
||||
__type(key, int);
|
||||
__type(value, struct map_value);
|
||||
__uint(max_entries, 1);
|
||||
} pcpu_array_map SEC(".maps");
|
||||
|
||||
struct hash_map {
|
||||
__uint(type, BPF_MAP_TYPE_HASH);
|
||||
__type(key, int);
|
||||
|
@ -22,6 +29,13 @@ struct hash_map {
|
|||
__uint(max_entries, 1);
|
||||
} hash_map SEC(".maps");
|
||||
|
||||
struct pcpu_hash_map {
|
||||
__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
|
||||
__type(key, int);
|
||||
__type(value, struct map_value);
|
||||
__uint(max_entries, 1);
|
||||
} pcpu_hash_map SEC(".maps");
|
||||
|
||||
struct hash_malloc_map {
|
||||
__uint(type, BPF_MAP_TYPE_HASH);
|
||||
__type(key, int);
|
||||
|
@ -30,6 +44,14 @@ struct hash_malloc_map {
|
|||
__uint(map_flags, BPF_F_NO_PREALLOC);
|
||||
} hash_malloc_map SEC(".maps");
|
||||
|
||||
struct pcpu_hash_malloc_map {
|
||||
__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
|
||||
__type(key, int);
|
||||
__type(value, struct map_value);
|
||||
__uint(max_entries, 1);
|
||||
__uint(map_flags, BPF_F_NO_PREALLOC);
|
||||
} pcpu_hash_malloc_map SEC(".maps");
|
||||
|
||||
struct lru_hash_map {
|
||||
__uint(type, BPF_MAP_TYPE_LRU_HASH);
|
||||
__type(key, int);
|
||||
|
@ -37,6 +59,41 @@ struct lru_hash_map {
|
|||
__uint(max_entries, 1);
|
||||
} lru_hash_map SEC(".maps");
|
||||
|
||||
struct lru_pcpu_hash_map {
|
||||
__uint(type, BPF_MAP_TYPE_LRU_PERCPU_HASH);
|
||||
__type(key, int);
|
||||
__type(value, struct map_value);
|
||||
__uint(max_entries, 1);
|
||||
} lru_pcpu_hash_map SEC(".maps");
|
||||
|
||||
struct cgrp_ls_map {
|
||||
__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
|
||||
__uint(map_flags, BPF_F_NO_PREALLOC);
|
||||
__type(key, int);
|
||||
__type(value, struct map_value);
|
||||
} cgrp_ls_map SEC(".maps");
|
||||
|
||||
struct task_ls_map {
|
||||
__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
|
||||
__uint(map_flags, BPF_F_NO_PREALLOC);
|
||||
__type(key, int);
|
||||
__type(value, struct map_value);
|
||||
} task_ls_map SEC(".maps");
|
||||
|
||||
struct inode_ls_map {
|
||||
__uint(type, BPF_MAP_TYPE_INODE_STORAGE);
|
||||
__uint(map_flags, BPF_F_NO_PREALLOC);
|
||||
__type(key, int);
|
||||
__type(value, struct map_value);
|
||||
} inode_ls_map SEC(".maps");
|
||||
|
||||
struct sk_ls_map {
|
||||
__uint(type, BPF_MAP_TYPE_SK_STORAGE);
|
||||
__uint(map_flags, BPF_F_NO_PREALLOC);
|
||||
__type(key, int);
|
||||
__type(value, struct map_value);
|
||||
} sk_ls_map SEC(".maps");
|
||||
|
||||
#define DEFINE_MAP_OF_MAP(map_type, inner_map_type, name) \
|
||||
struct { \
|
||||
__uint(type, map_type); \
|
||||
|
@ -61,6 +118,7 @@ extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp
|
|||
extern struct prog_test_ref_kfunc *
|
||||
bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b) __ksym;
|
||||
extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;
|
||||
void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p) __ksym;
|
||||
|
||||
#define WRITE_ONCE(x, val) ((*(volatile typeof(x) *) &(x)) = (val))
|
||||
|
||||
|
@ -90,12 +148,23 @@ static void test_kptr_ref(struct map_value *v)
|
|||
WRITE_ONCE(v->unref_ptr, p);
|
||||
if (!p)
|
||||
return;
|
||||
/*
|
||||
* p is rcu_ptr_prog_test_ref_kfunc,
|
||||
* because bpf prog is non-sleepable and runs in RCU CS.
|
||||
* p can be passed to kfunc that requires KF_RCU.
|
||||
*/
|
||||
bpf_kfunc_call_test_ref(p);
|
||||
if (p->a + p->b > 100)
|
||||
return;
|
||||
/* store NULL */
|
||||
p = bpf_kptr_xchg(&v->ref_ptr, NULL);
|
||||
if (!p)
|
||||
return;
|
||||
/*
|
||||
* p is trusted_ptr_prog_test_ref_kfunc.
|
||||
* p can be passed to kfunc that requires KF_RCU.
|
||||
*/
|
||||
bpf_kfunc_call_test_ref(p);
|
||||
if (p->a + p->b > 100) {
|
||||
bpf_kfunc_call_test_release(p);
|
||||
return;
|
||||
|
@ -160,6 +229,58 @@ int test_map_kptr(struct __sk_buff *ctx)
|
|||
return 0;
|
||||
}
|
||||
|
||||
SEC("tp_btf/cgroup_mkdir")
|
||||
int BPF_PROG(test_cgrp_map_kptr, struct cgroup *cgrp, const char *path)
|
||||
{
|
||||
struct map_value *v;
|
||||
|
||||
v = bpf_cgrp_storage_get(&cgrp_ls_map, cgrp, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
|
||||
if (v)
|
||||
test_kptr(v);
|
||||
return 0;
|
||||
}
|
||||
|
||||
SEC("lsm/inode_unlink")
|
||||
int BPF_PROG(test_task_map_kptr, struct inode *inode, struct dentry *victim)
|
||||
{
|
||||
struct task_struct *task;
|
||||
struct map_value *v;
|
||||
|
||||
task = bpf_get_current_task_btf();
|
||||
if (!task)
|
||||
return 0;
|
||||
v = bpf_task_storage_get(&task_ls_map, task, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
|
||||
if (v)
|
||||
test_kptr(v);
|
||||
return 0;
|
||||
}
|
||||
|
||||
SEC("lsm/inode_unlink")
|
||||
int BPF_PROG(test_inode_map_kptr, struct inode *inode, struct dentry *victim)
|
||||
{
|
||||
struct map_value *v;
|
||||
|
||||
v = bpf_inode_storage_get(&inode_ls_map, inode, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
|
||||
if (v)
|
||||
test_kptr(v);
|
||||
return 0;
|
||||
}
|
||||
|
||||
SEC("tc")
|
||||
int test_sk_map_kptr(struct __sk_buff *ctx)
|
||||
{
|
||||
struct map_value *v;
|
||||
struct bpf_sock *sk;
|
||||
|
||||
sk = ctx->sk;
|
||||
if (!sk)
|
||||
return 0;
|
||||
v = bpf_sk_storage_get(&sk_ls_map, sk, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
|
||||
if (v)
|
||||
test_kptr(v);
|
||||
return 0;
|
||||
}
|
||||
|
||||
SEC("tc")
|
||||
int test_map_in_map_kptr(struct __sk_buff *ctx)
|
||||
{
|
||||
|
@ -189,106 +310,257 @@ int test_map_in_map_kptr(struct __sk_buff *ctx)
|
|||
return 0;
|
||||
}
|
||||
|
||||
SEC("tc")
|
||||
int test_map_kptr_ref(struct __sk_buff *ctx)
|
||||
int ref = 1;
|
||||
|
||||
static __always_inline
|
||||
int test_map_kptr_ref_pre(struct map_value *v)
|
||||
{
|
||||
struct prog_test_ref_kfunc *p, *p_st;
|
||||
unsigned long arg = 0;
|
||||
struct map_value *v;
|
||||
int key = 0, ret;
|
||||
int ret;
|
||||
|
||||
p = bpf_kfunc_call_test_acquire(&arg);
|
||||
if (!p)
|
||||
return 1;
|
||||
ref++;
|
||||
|
||||
p_st = p->next;
|
||||
if (p_st->cnt.refs.counter != 2) {
|
||||
if (p_st->cnt.refs.counter != ref) {
|
||||
ret = 2;
|
||||
goto end;
|
||||
}
|
||||
|
||||
v = bpf_map_lookup_elem(&array_map, &key);
|
||||
if (!v) {
|
||||
p = bpf_kptr_xchg(&v->ref_ptr, p);
|
||||
if (p) {
|
||||
ret = 3;
|
||||
goto end;
|
||||
}
|
||||
|
||||
p = bpf_kptr_xchg(&v->ref_ptr, p);
|
||||
if (p) {
|
||||
ret = 4;
|
||||
goto end;
|
||||
}
|
||||
if (p_st->cnt.refs.counter != 2)
|
||||
return 5;
|
||||
if (p_st->cnt.refs.counter != ref)
|
||||
return 4;
|
||||
|
||||
p = bpf_kfunc_call_test_kptr_get(&v->ref_ptr, 0, 0);
|
||||
if (!p)
|
||||
return 6;
|
||||
if (p_st->cnt.refs.counter != 3) {
|
||||
ret = 7;
|
||||
return 5;
|
||||
ref++;
|
||||
if (p_st->cnt.refs.counter != ref) {
|
||||
ret = 6;
|
 		goto end;
 	}
 	bpf_kfunc_call_test_release(p);
-	if (p_st->cnt.refs.counter != 2)
-		return 8;
+	ref--;
+	if (p_st->cnt.refs.counter != ref)
+		return 7;
 
 	p = bpf_kptr_xchg(&v->ref_ptr, NULL);
 	if (!p)
-		return 9;
+		return 8;
 	bpf_kfunc_call_test_release(p);
-	if (p_st->cnt.refs.counter != 1)
-		return 10;
+	ref--;
+	if (p_st->cnt.refs.counter != ref)
+		return 9;
 
 	p = bpf_kfunc_call_test_acquire(&arg);
 	if (!p)
-		return 11;
+		return 10;
+	ref++;
 	p = bpf_kptr_xchg(&v->ref_ptr, p);
 	if (p) {
-		ret = 12;
+		ret = 11;
 		goto end;
 	}
-	if (p_st->cnt.refs.counter != 2)
-		return 13;
+	if (p_st->cnt.refs.counter != ref)
+		return 12;
 	/* Leave in map */
 
 	return 0;
 end:
+	ref--;
 	bpf_kfunc_call_test_release(p);
 	return ret;
 }
 
-SEC("tc")
-int test_map_kptr_ref2(struct __sk_buff *ctx)
+static __always_inline
+int test_map_kptr_ref_post(struct map_value *v)
 {
 	struct prog_test_ref_kfunc *p, *p_st;
-	struct map_value *v;
-	int key = 0;
-
-	v = bpf_map_lookup_elem(&array_map, &key);
-	if (!v)
-		return 1;
 
 	p_st = v->ref_ptr;
-	if (!p_st || p_st->cnt.refs.counter != 2)
-		return 2;
+	if (!p_st || p_st->cnt.refs.counter != ref)
+		return 1;
 
 	p = bpf_kptr_xchg(&v->ref_ptr, NULL);
 	if (!p)
-		return 3;
-	if (p_st->cnt.refs.counter != 2) {
+		return 2;
+	if (p_st->cnt.refs.counter != ref) {
 		bpf_kfunc_call_test_release(p);
-		return 4;
+		return 3;
 	}
 
 	p = bpf_kptr_xchg(&v->ref_ptr, p);
 	if (p) {
 		bpf_kfunc_call_test_release(p);
-		return 5;
+		return 4;
 	}
-	if (p_st->cnt.refs.counter != 2)
-		return 6;
+	if (p_st->cnt.refs.counter != ref)
+		return 5;
 
 	return 0;
 }
 
+#define TEST(map)					\
+	v = bpf_map_lookup_elem(&map, &key);		\
+	if (!v)						\
+		return -1;				\
+	ret = test_map_kptr_ref_pre(v);			\
+	if (ret)					\
+		return ret;
+
+#define TEST_PCPU(map)					\
+	v = bpf_map_lookup_percpu_elem(&map, &key, 0);	\
+	if (!v)						\
+		return -1;				\
+	ret = test_map_kptr_ref_pre(v);			\
+	if (ret)					\
+		return ret;
+
+SEC("tc")
+int test_map_kptr_ref1(struct __sk_buff *ctx)
+{
+	struct map_value *v, val = {};
+	int key = 0, ret;
+
+	bpf_map_update_elem(&hash_map, &key, &val, 0);
+	bpf_map_update_elem(&hash_malloc_map, &key, &val, 0);
+	bpf_map_update_elem(&lru_hash_map, &key, &val, 0);
+
+	bpf_map_update_elem(&pcpu_hash_map, &key, &val, 0);
+	bpf_map_update_elem(&pcpu_hash_malloc_map, &key, &val, 0);
+	bpf_map_update_elem(&lru_pcpu_hash_map, &key, &val, 0);
+
+	TEST(array_map);
+	TEST(hash_map);
+	TEST(hash_malloc_map);
+	TEST(lru_hash_map);
+
+	TEST_PCPU(pcpu_array_map);
+	TEST_PCPU(pcpu_hash_map);
+	TEST_PCPU(pcpu_hash_malloc_map);
+	TEST_PCPU(lru_pcpu_hash_map);
+
+	return 0;
+}
+
+#undef TEST
+#undef TEST_PCPU
+
+#define TEST(map)					\
+	v = bpf_map_lookup_elem(&map, &key);		\
+	if (!v)						\
+		return -1;				\
+	ret = test_map_kptr_ref_post(v);		\
+	if (ret)					\
+		return ret;
+
+#define TEST_PCPU(map)					\
+	v = bpf_map_lookup_percpu_elem(&map, &key, 0);	\
+	if (!v)						\
+		return -1;				\
+	ret = test_map_kptr_ref_post(v);		\
+	if (ret)					\
+		return ret;
+
+SEC("tc")
+int test_map_kptr_ref2(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0, ret;
+
+	TEST(array_map);
+	TEST(hash_map);
+	TEST(hash_malloc_map);
+	TEST(lru_hash_map);
+
+	TEST_PCPU(pcpu_array_map);
+	TEST_PCPU(pcpu_hash_map);
+	TEST_PCPU(pcpu_hash_malloc_map);
+	TEST_PCPU(lru_pcpu_hash_map);
+
+	return 0;
+}
+
+#undef TEST
+#undef TEST_PCPU
+
+SEC("tc")
+int test_map_kptr_ref3(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	unsigned long sp = 0;
+
+	p = bpf_kfunc_call_test_acquire(&sp);
+	if (!p)
+		return 1;
+	ref++;
+	if (p->cnt.refs.counter != ref) {
+		bpf_kfunc_call_test_release(p);
+		return 2;
+	}
+	bpf_kfunc_call_test_release(p);
+	ref--;
+	return 0;
+}
+
+SEC("syscall")
+int test_ls_map_kptr_ref1(void *ctx)
+{
+	struct task_struct *current;
+	struct map_value *v;
+	int ret;
+
+	current = bpf_get_current_task_btf();
+	if (!current)
+		return 100;
+	v = bpf_task_storage_get(&task_ls_map, current, NULL, 0);
+	if (v)
+		return 150;
+	v = bpf_task_storage_get(&task_ls_map, current, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (!v)
+		return 200;
+	return test_map_kptr_ref_pre(v);
+}
+
+SEC("syscall")
+int test_ls_map_kptr_ref2(void *ctx)
+{
+	struct task_struct *current;
+	struct map_value *v;
+	int ret;
+
+	current = bpf_get_current_task_btf();
+	if (!current)
+		return 100;
+	v = bpf_task_storage_get(&task_ls_map, current, NULL, 0);
+	if (!v)
+		return 200;
+	return test_map_kptr_ref_post(v);
+}
+
+SEC("syscall")
+int test_ls_map_kptr_ref_del(void *ctx)
+{
+	struct task_struct *current;
+	struct map_value *v;
+	int ret;
+
+	current = bpf_get_current_task_btf();
+	if (!current)
+		return 100;
+	v = bpf_task_storage_get(&task_ls_map, current, NULL, 0);
+	if (!v)
+		return 200;
+	if (!v->ref_ptr)
+		return 300;
+	return bpf_task_storage_delete(&task_ls_map, current);
+}
+
 char _license[] SEC("license") = "GPL";
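The map_kptr.c rework above switches the tests from hardcoded refcount values to a shared global `ref` counter and factors the assertions into test_map_kptr_ref_pre()/test_map_kptr_ref_post() helpers, so the same checks can run against array, hash, LRU, per-CPU and task local storage maps. For readers new to the mechanism under test, here is a minimal sketch of the underlying pattern: a referenced kptr field in a map value that may only be moved in and out atomically with bpf_kptr_xchg(). It assumes vmlinux.h exposes struct prog_test_ref_kfunc and that the bpf_kfunc_call_test_* kfuncs (exported by the kernel only for selftests) are available; the program and map names here are made up for illustration.

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

/* Acquire/release kfunc pair exported by the kernel for selftests. */
extern struct prog_test_ref_kfunc *
bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr) __ksym;
extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym;

struct map_value {
	struct prog_test_ref_kfunc __kptr *ref_ptr;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct map_value);
} array_map SEC(".maps");

SEC("tc")
int kptr_xchg_example(struct __sk_buff *ctx)
{
	struct prog_test_ref_kfunc *p, *old;
	unsigned long arg = 0;
	struct map_value *v;
	int key = 0;

	v = bpf_map_lookup_elem(&array_map, &key);
	if (!v)
		return 0;

	p = bpf_kfunc_call_test_acquire(&arg);	/* refcount +1 */
	if (!p)
		return 0;

	/* Move our reference into the map; we get back whatever was
	 * stored there before (possibly NULL) and now own that instead.
	 */
	old = bpf_kptr_xchg(&v->ref_ptr, p);
	if (old)
		bpf_kfunc_call_test_release(old);	/* refcount -1 */
	return 0;
}

char _license[] SEC("license") = "GPL";

bpf_kptr_xchg() being an atomic exchange is what makes the slot safe under concurrency: however many programs race on the same map element, each reference has exactly one owner at any time.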
@@ -7,9 +7,9 @@
 
 struct map_value {
 	char buf[8];
-	struct prog_test_ref_kfunc __kptr *unref_ptr;
-	struct prog_test_ref_kfunc __kptr_ref *ref_ptr;
-	struct prog_test_member __kptr_ref *ref_memb_ptr;
+	struct prog_test_ref_kfunc __kptr_untrusted *unref_ptr;
+	struct prog_test_ref_kfunc __kptr *ref_ptr;
+	struct prog_test_member __kptr *ref_memb_ptr;
 };
 
 struct array_map {
@@ -281,7 +281,7 @@ int reject_kptr_get_bad_type_match(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("R1 type=untrusted_ptr_or_null_ expected=percpu_ptr_")
+__failure __msg("R1 type=rcu_ptr_or_null_ expected=percpu_ptr_")
 int mark_ref_as_untrusted_or_null(struct __sk_buff *ctx)
 {
 	struct map_value *v;
@@ -316,7 +316,7 @@ int reject_untrusted_store_to_ref(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("R2 type=untrusted_ptr_ expected=ptr_")
+__failure __msg("R2 must be referenced")
 int reject_untrusted_xchg(struct __sk_buff *ctx)
 {
 	struct prog_test_ref_kfunc *p;
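These hunks, apparently from the map_kptr failure-mode selftest, are fallout from the kptr annotation rename that accompanies the RCU rework: the old unreferenced `__kptr` is now spelled `__kptr_untrusted`, and the referenced `__kptr_ref` became plain `__kptr`, making the safer flavor the default spelling. A rough sketch of what the two annotations mean, assuming the tag definitions bpf_helpers.h carries after the rename:

/* Approximate definitions from bpf_helpers.h after the rename: */
#define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted")))
#define __kptr __attribute__((btf_type_tag("kptr")))

struct map_value {
	/* Plain loads are allowed, but the result is untrusted: the
	 * verifier converts dereferences into probe-reads, and the
	 * pointer cannot be passed to kfuncs expecting trusted args.
	 */
	struct prog_test_ref_kfunc __kptr_untrusted *unref_ptr;
	/* Referenced kptr: holds a refcount on the object and may only
	 * be moved in or out of the map value via bpf_kptr_xchg().
	 */
	struct prog_test_ref_kfunc __kptr *ref_ptr;
};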
@@ -17,7 +17,7 @@ char _license[] SEC("license") = "GPL";
  */
 
 SEC("tp_btf/task_newtask")
-__failure __msg("R2 must be referenced or trusted")
+__failure __msg("R2 must be")
 int BPF_PROG(test_invalid_nested_user_cpus, struct task_struct *task, u64 clone_flags)
 {
 	bpf_cpumask_test_cpu(0, task->user_cpus_ptr);
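The expected-message change above comes from the reworked argument-trust checks: a nested pointer loaded from task->user_cpus_ptr is still rejected by the cpumask kfuncs, only the verifier's wording differs. For contrast, here is a hypothetical sketch of a shape the verifier does accept, using a cpumask the program itself creates and therefore holds a reference on. It assumes the bpf_cpumask_* kfuncs added this cycle, and the program name is invented:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern struct bpf_cpumask *bpf_cpumask_create(void) __ksym;
extern void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym;
extern bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) __ksym;
extern void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(test_valid_cpumask, struct task_struct *task, u64 clone_flags)
{
	struct bpf_cpumask *mask;

	mask = bpf_cpumask_create();	/* referenced, hence trusted */
	if (!mask)
		return 0;
	bpf_cpumask_set_cpu(0, mask);
	/* struct bpf_cpumask embeds a struct cpumask as its first
	 * member, so the cpumask selftests pass it along with a cast.
	 */
	bpf_cpumask_test_cpu(0, (const struct cpumask *)mask);
	bpf_cpumask_release(mask);	/* the reference must be dropped */
	return 0;
}

char _license[] SEC("license") = "GPL";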
@@ -75,7 +75,7 @@ SEC("tc")
 long rbtree_add_and_remove(void *ctx)
 {
 	struct bpf_rb_node *res = NULL;
-	struct node_data *n, *m;
+	struct node_data *n, *m = NULL;
 
 	n = bpf_obj_new(typeof(*n));
 	if (!n)
@@ -232,8 +232,11 @@ long rbtree_api_first_release_unlock_escape(void *ctx)
 
 	bpf_spin_lock(&glock);
 	res = bpf_rbtree_first(&groot);
-	if (res)
-		n = container_of(res, struct node_data, node);
+	if (!res) {
+		bpf_spin_unlock(&glock);
+		return 1;
+	}
+	n = container_of(res, struct node_data, node);
 	bpf_spin_unlock(&glock);
 
 	bpf_spin_lock(&glock);
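The two rbtree hunks above tighten the same discipline: node pointers that may reach cleanup paths are NULL-initialized, and the result of bpf_rbtree_first() is NULL-checked before container_of() while the lock protecting the tree is still held, never after it is dropped. A self-contained sketch of that pattern, assuming the rbtree kfunc declarations and the __contains() macro from the selftests' bpf_experimental.h; the private() helper mirrors the one in the rbtree selftest, and container_of() is defined locally in case the headers in use do not provide it:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

#ifndef container_of
#define container_of(ptr, type, member)				\
	((type *)((void *)(ptr) - offsetof(type, member)))
#endif

struct node_data {
	long key;
	struct bpf_rb_node node;
};

#define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))
private(A) struct bpf_spin_lock glock;
private(A) struct bpf_rb_root groot __contains(node_data, node);

static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
	struct node_data *node_a, *node_b;

	node_a = container_of(a, struct node_data, node);
	node_b = container_of(b, struct node_data, node);
	return node_a->key < node_b->key;
}

SEC("tc")
long rbtree_first_example(void *ctx)
{
	struct bpf_rb_node *res;
	struct node_data *n;
	long key;

	n = bpf_obj_new(typeof(*n));
	if (!n)
		return 1;
	n->key = 42;

	bpf_spin_lock(&glock);
	bpf_rbtree_add(&groot, &n->node, less);	/* tree now owns n */
	res = bpf_rbtree_first(&groot);
	if (!res) {			/* check before container_of() */
		bpf_spin_unlock(&glock);
		return 2;
	}
	n = container_of(res, struct node_data, node);
	key = n->key;			/* read while the lock is held */
	bpf_spin_unlock(&glock);
	return key == 42 ? 0 : 3;
}

char _license[] SEC("license") = "GPL";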
@@ -81,7 +81,7 @@ int no_lock(void *ctx)
 {
 	struct task_struct *task, *real_parent;
 
-	/* no bpf_rcu_read_lock(), old code still works */
+	/* old style ptr_to_btf_id is not allowed in sleepable */
 	task = bpf_get_current_task_btf();
 	real_parent = task->real_parent;
 	(void)bpf_task_storage_get(&map_a, real_parent, 0, 0);
@@ -286,13 +286,13 @@ out:
 }
 
 SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
-int task_untrusted_non_rcuptr(void *ctx)
+int task_trusted_non_rcuptr(void *ctx)
 {
 	struct task_struct *task, *group_leader;
 
 	task = bpf_get_current_task_btf();
 	bpf_rcu_read_lock();
-	/* the pointer group_leader marked as untrusted */
+	/* the pointer group_leader is explicitly marked as trusted */
 	group_leader = task->real_parent->group_leader;
 	(void)bpf_task_storage_get(&map_a, group_leader, 0, 0);
 	bpf_rcu_read_unlock();
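The rcu_read_lock.c updates reflect the reworked RCU enforcement: fields such as task->real_parent are __rcu-tagged in the kernel, so sleepable programs may dereference them, and pass the result on to helpers, only inside an explicit bpf_rcu_read_lock()/bpf_rcu_read_unlock() section. A minimal sketch of the accepted shape, assuming a task local storage map named map_a as in the test; the section name hardcodes the x86-64 syscall symbol where the selftests use a SYS_PREFIX macro:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern void bpf_rcu_read_lock(void) __ksym;
extern void bpf_rcu_read_unlock(void) __ksym;

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, long);
} map_a SEC(".maps");

SEC("fentry.s/__x64_sys_getpgid")	/* sleepable program */
int rcu_section_example(void *ctx)
{
	struct task_struct *task, *real_parent;

	task = bpf_get_current_task_btf();
	bpf_rcu_read_lock();
	/* The load of an __rcu field is only valid inside the RCU read
	 * section, and so is passing its result to helpers or kfuncs.
	 */
	real_parent = task->real_parent;
	if (real_parent)
		(void)bpf_task_storage_get(&map_a, real_parent, 0, 0);
	bpf_rcu_read_unlock();
	return 0;
}

char _license[] SEC("license") = "GPL";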