Commit graph

1,049,563 commits

Author SHA1 Message Date
Song Liu 9e2ad638ae bpf: Extend BTF_ID_LIST_GLOBAL with parameter for number of IDs
syzbot reported the following BUG w/o CONFIG_DEBUG_INFO_BTF

BUG: KASAN: global-out-of-bounds in task_iter_init+0x212/0x2e7 kernel/bpf/task_iter.c:661
Read of size 4 at addr ffffffff90297404 by task swapper/0/1

CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.15.0-syzkaller #0
Hardware name: ... Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
print_address_description.constprop.0.cold+0xf/0x309 mm/kasan/report.c:256
__kasan_report mm/kasan/report.c:442 [inline]
kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
task_iter_init+0x212/0x2e7 kernel/bpf/task_iter.c:661
do_one_initcall+0x103/0x650 init/main.c:1295
do_initcall_level init/main.c:1368 [inline]
do_initcalls init/main.c:1384 [inline]
do_basic_setup init/main.c:1403 [inline]
kernel_init_freeable+0x6b1/0x73a init/main.c:1606
kernel_init+0x1a/0x1d0 init/main.c:1497
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>

This is caused by the hard-coded name[1] array in BTF_ID_LIST_GLOBAL (w/o
CONFIG_DEBUG_INFO_BTF). Fix this by adding a parameter n to
BTF_ID_LIST_GLOBAL so the array is sized correctly. This also avoids ifdef
CONFIG_DEBUG_INFO_BTF in btf.c and filter.c.
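
A rough sketch of the idea (the actual macro lives in include/linux/btf_ids.h;
details here are illustrative, not a quote of the patch): without BTF the stub
becomes a properly sized array instead of a hard-coded one-element one.

  #ifdef CONFIG_DEBUG_INFO_BTF
  #define BTF_ID_LIST_GLOBAL(name, n) __BTF_ID_LIST(name, globl)
  #else
  /* w/o BTF the IDs stay 0, but the array must be big enough to index */
  #define BTF_ID_LIST_GLOBAL(name, n) u32 name[n];
  #endif

  /* e.g. task_struct, file and vm_area_struct IDs used by bpf_find_vma */
  BTF_ID_LIST_GLOBAL(btf_task_struct_ids, 3)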

Fixes: 7c7e3d31e7 ("bpf: Introduce helper bpf_find_vma")
Reported-by: syzbot+e0d81ec552a21d9071aa@syzkaller.appspotmail.com
Reported-by: Eric Dumazet <edumazet@google.com>
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20211112150243.1270987-2-songliubraving@fb.com
2021-11-12 10:19:09 -08:00
Stanislav Fomichev 314f14abde bpftool: Enable libbpf's strict mode by default
Otherwise, attaching with bpftool doesn't work with strict section names.

Also:

  - Add --legacy option to switch back to pre-1.0 behavior
  - Print a warning when program fails to load in strict mode to
    point to --legacy flag
  - By default, don't append / to the section name; in strict
    mode it's relevant only for a small subset of prog types

+ bpftool --legacy prog loadall tools/testing/selftests/bpf/test_cgroup_link.o /sys/fs/bpf/kprobe type kprobe
libbpf: failed to pin program: File exists
Error: failed to pin all programs
+ bpftool prog loadall tools/testing/selftests/bpf/test_cgroup_link.o /sys/fs/bpf/kprobe type kprobe
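
A minimal sketch of what default-strict amounts to on the bpftool side
(assuming the existing libbpf_set_strict_mode() API; the option plumbing and
the p_err() call are illustrative):

  if (!legacy_libbpf) {
        /* opt into libbpf 1.0 semantics, incl. strict section names */
        if (libbpf_set_strict_mode(LIBBPF_STRICT_ALL))
                p_err("failed to enable libbpf strict mode");
  }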

v1 -> v2:
  - strict by default (Quentin Monnet)
  - add more info to --legacy description (Quentin Monnet)
  - add bash completion (Quentin Monnet)

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20211110192324.920934-1-sdf@google.com
2021-11-12 16:54:58 +01:00
Alexei Starovoitov 9faaffbe85 Merge branch 'Support BTF_KIND_TYPE_TAG for btf_type_tag attributes'
Yonghong Song says:

====================

LLVM patches ([1] for clang, [2] and [3] for the BPF backend)
added support for btf_type_tag attributes. This patch set adds the
corresponding support to the kernel.

The main motivation for btf_type_tag is to bring kernel
annotations such as __user and __rcu into BTF. With such information
available in BTF, the bpf verifier can detect misuses
and reject the program. For example, for a __user tagged pointer,
developers can then use the proper helper, e.g. bpf_probe_read_user(),
to read the data.

BTF_KIND_TYPE_TAG may also be useful for other tracing
facilities: instead of requiring the user to specify whether an
address is a kernel or user address, the kernel can detect it
by itself from BTF.

Patch 1 adds support in the kernel, Patch 2 in libbpf and Patch 3
in bpftool. Patches 4-9 are bpf selftests and Patch 10
updates Documentation/bpf/btf.rst with the new BTF kind.

  [1] https://reviews.llvm.org/D111199
  [2] https://reviews.llvm.org/D113222
  [3] https://reviews.llvm.org/D113496

Changelogs:
  v2 -> v3:
    - rebase to resolve merge conflicts.
  v1 -> v2:
    - add more dedup tests.
    - remove build requirement for LLVM=1.
    - remove testing macro __has_attribute in bpf programs
      as it is always defined in recent clang compilers.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-11 17:41:12 -08:00
Yonghong Song d52f5c639d docs/bpf: Update documentation for BTF_KIND_TYPE_TAG support
Add BTF_KIND_TYPE_TAG documentation in btf.rst.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012656.1509082-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 3f1d0dc0ba selftests/bpf: Clarify llvm dependency with btf_tag selftest
The btf_tag selftest needs a sufficiently recent LLVM (>= LLVM 14).
Make that clear in the selftests README.rst file.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012651.1508549-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 5698a42a73 selftests/bpf: Add a C test for btf_type_tag
The following is the main btf_type_tag usage in the
C test:
  #define __tag1 __attribute__((btf_type_tag("tag1")))
  #define __tag2 __attribute__((btf_type_tag("tag2")))
  struct btf_type_tag_test {
       int __tag1 * __tag1 __tag2 *p;
  } g;

The bpftool raw dump with related types:
  [4] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
  [11] STRUCT 'btf_type_tag_test' size=8 vlen=1
          'p' type_id=14 bits_offset=0
  [12] TYPE_TAG 'tag1' type_id=16
  [13] TYPE_TAG 'tag2' type_id=12
  [14] PTR '(anon)' type_id=13
  [15] TYPE_TAG 'tag1' type_id=4
  [16] PTR '(anon)' type_id=15
  [17] VAR 'g' type_id=11, linkage=global

With the C-format dump, we have
  struct btf_type_tag_test {
        int __attribute__((btf_type_tag("tag1"))) * __attribute__((btf_type_tag("tag1"))) __attribute__((btf_type_tag("tag2"))) *p;
  };
The resulting C code is identical to the original definition, except that the macros are expanded into the attributes.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012646.1508231-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 26c79fcbfa selftests/bpf: Rename progs/tag.c to progs/btf_decl_tag.c
Rename progs/tag.c to progs/btf_decl_tag.c so we can introduce
progs/btf_type_tag.c in the next patch.

Also create a subtest for btf_decl_tag in prog_tests/btf_tag.c
so we can introduce btf_type_tag subtest in the next patch.

I also took the opportunity to remove the check of whether __has_attribute
is defined in progs/btf_decl_tag.c, since all recent
clang versions already support this macro.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012641.1507144-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 846f4826d1 selftests/bpf: Test BTF_KIND_TYPE_TAG for deduplication
Add BTF_KIND_TYPE_TAG deduplication unit tests.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012635.1506853-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 6aa5dabc9d selftests/bpf: Add BTF_KIND_TYPE_TAG unit tests
Add BTF_KIND_TYPE_TAG unit tests.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012630.1506095-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 0dc8587220 selftests/bpf: Test libbpf API function btf__add_type_tag()
Add unit tests for btf__add_type_tag().

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012625.1505748-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 3da5ba6f05 bpftool: Support BTF_KIND_TYPE_TAG
Add bpftool support for BTF_KIND_TYPE_TAG.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012620.1505506-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 2dc1e488e5 libbpf: Support BTF_KIND_TYPE_TAG
Add libbpf support for BTF_KIND_TYPE_TAG.
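
A hedged usage sketch of the accompanying write API, assuming the
btf__add_type_tag(btf, value, ref_type_id) form described in this series:

  /* build: int __attribute__((btf_type_tag("user"))) *  */
  int int_id = btf__add_int(btf, "int", 4, BTF_INT_SIGNED);
  int tag_id = btf__add_type_tag(btf, "user", int_id);  /* TYPE_TAG -> int */
  int ptr_id = btf__add_ptr(btf, tag_id);               /* PTR -> TYPE_TAG */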

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012614.1505315-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Yonghong Song 8c42d2fa4e bpf: Support BTF_KIND_TYPE_TAG for btf_type_tag attributes
LLVM patches ([1] for clang, [2] and [3] for the BPF backend)
added support for btf_type_tag attributes. This patch adds the
corresponding support to the kernel.

The main motivation for btf_type_tag is to bring kernel
annotations such as __user and __rcu into BTF. With such information
available in BTF, the bpf verifier can detect misuses
and reject the program. For example, for a __user tagged pointer,
developers can then use the proper helper, e.g. bpf_probe_read_user(),
to read the data.

BTF_KIND_TYPE_TAG may also be useful for other tracing
facilities: instead of requiring the user to specify whether an
address is a kernel or user address, the kernel can detect it
by itself from BTF.

  [1] https://reviews.llvm.org/D111199
  [2] https://reviews.llvm.org/D113222
  [3] https://reviews.llvm.org/D113496
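
For illustration only (not part of this patch), a kernel annotation could be
routed into BTF roughly like this, guarded on compiler support:

  #if __has_attribute(btf_type_tag)
  # define __user __attribute__((btf_type_tag("user")))   /* tag lands in BTF */
  #else
  # define __user                                         /* no-op fallback */
  #endif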

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211112012609.1505032-1-yhs@fb.com
2021-11-11 17:41:11 -08:00
Alexei Starovoitov 2326ff8d5c Merge branch 'Future-proof more tricky libbpf APIs'
Andrii Nakryiko says:

====================

This patch set continues the work of revamping libbpf APIs that are not
extensible, as they were added before we figured out all the intricacies of
building APIs that can preserve ABI compatibility (both backward and forward).

What makes them tricky is that (most of) these APIs are actively used by
multiple applications, so we need to be careful about refactoring them. See
individual patches for details, but the general approach is similar to
previous bpf_prog_load() API revamp. The biggest difference and complexity is
in changing btf_dump__new(), because function overloading through macro magic
doesn't work based on the number of arguments, as both the new and old APIs have
4 arguments. Because of that, another overloading approach is taken: the overload
happens based on argument types.

I've validated manually (using a local test_progs-shared flavor that
compiles test_progs against libbpf as a shared library) that "old applications"
(selftests before being adapted to the new variants of the revamped APIs)
compile and run successfully against both the newest libbpf version and
the older libbpf version (provided no new variants are used). All these
scenarios work as expected.

v1->v2:
  - add explicit printf_fn NULL check in btf_dump__new() (Alexei);
  - replaced + with || in __builtin_choose_expr() (Alexei);
  - dropped test_progs-shared flavor (Alexei).
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-11 16:54:06 -08:00
Andrii Nakryiko 164b04f27f bpftool: Update btf_dump__new() and perf_buffer__new_raw() calls
Use v1.0-compatible variants of btf_dump and perf_buffer "constructors".
This is also a demonstration of reusing struct perf_buffer_raw_opts as
OPTS-style option struct for new perf_buffer__new_raw() API.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-10-andrii@kernel.org
2021-11-11 16:54:06 -08:00
Andrii Nakryiko eda8bfa5b7 tools/runqslower: Update perf_buffer__new() calls
Use v1.0+ compatible variant of perf_buffer__new() call to prepare for
deprecation.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-9-andrii@kernel.org
2021-11-11 16:54:06 -08:00
Andrii Nakryiko 60ba87bb6b selftests/bpf: Update btf_dump__new() uses to v1.0+ variant
Update to-be-deprecated forms of btf_dump__new().

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-8-andrii@kernel.org
2021-11-11 16:54:06 -08:00
Andrii Nakryiko 0b52a5f4b9 selftests/bpf: Migrate all deprecated perf_buffer uses
Migrate all old-style perf_buffer__new() and perf_buffer__new_raw()
calls to new v1.0+ variants.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-7-andrii@kernel.org
2021-11-11 16:54:05 -08:00
Andrii Nakryiko 4178893465 libbpf: Make perf_buffer__new() use OPTS-based interface
Add new variants of perf_buffer__new() and perf_buffer__new_raw() that
use OPTS-based options for future extensibility ([0]). Given all the
currently used API names are best fits, re-use them and use
___libbpf_override() approach and symbol versioning to preserve ABI and
source code compatibility. struct perf_buffer_opts and struct
perf_buffer_raw_opts are kept as well, but they are restructured such
that they are OPTS-based when used with the new APIs. For struct
perf_buffer_raw_opts we keep a few fields intact, so we also have to
preserve their memory location both when used as OPTS and for the
legacy API variants. This is achieved with anonymous padding for the OPTS
"incarnation" of the struct. These pads can eventually be used for new
options.

  [0] Closes: https://github.com/libbpf/libbpf/issues/311
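
A usage sketch of the new-style call, assuming the v1.0 signature with a
trailing opts pointer (handle_event/handle_lost are the user's own callbacks,
shown here only for illustration):

  struct perf_buffer *pb;
  LIBBPF_OPTS(perf_buffer_opts, opts);  /* no extra options needed yet */

  pb = perf_buffer__new(map_fd, 64 /* page_cnt per CPU */, handle_event,
                        handle_lost, NULL /* ctx */, &opts);
  if (libbpf_get_error(pb))
        /* handle error */;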

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-6-andrii@kernel.org
2021-11-11 16:54:05 -08:00
Andrii Nakryiko 6084f5dc92 libbpf: Ensure btf_dump__new() and btf_dump_opts are future-proof
Change btf_dump__new() and the corresponding struct btf_dump_opts
to be extensible by using the OPTS "framework" ([0]). Given we don't change
the names, we use a similar approach as with bpf_prog_load(), but this
time we ended up with two APIs with the same name and same number of
arguments, so overloading based on number of arguments with
___libbpf_override() doesn't work.

Instead, use "overloading" based on types. In this particular case,
print callback has to be specified, so we detect which argument is
a callback. If it's 4th (last) argument, old implementation of API is
used by user code. If not, it must be 2nd, and thus new implementation
is selected. The rest is handled by the same symbol versioning approach.
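
Conceptually, the type-based dispatch can be pictured like this (a simplified
sketch; the real macro also ties into the symbol-versioning machinery):

  /* if the 4th argument is the printf callback, route to the old API,
   * otherwise assume the new (btf, opts)-style call */
  #define btf_dump__new(a1, a2, a3, a4) __builtin_choose_expr(             \
        __builtin_types_compatible_p(typeof(a4), btf_dump_printf_fn_t),    \
        btf_dump__new_deprecated((void *)a1, (void *)a2, (void *)a3, a4),  \
        btf_dump__new((void *)a1, (void *)a2, (void *)a3, (void *)a4))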

btf_ext argument is dropped as it was never used and isn't necessary
either. If in the future we'll need btf_ext, that will be added into
OPTS-based struct btf_dump_opts.

struct btf_dump_opts is reused for both the old and the new APIs. The ctx field
is marked deprecated in v0.7+ and is put at the same memory location
as OPTS's sz field. Any user of the new-style btf_dump__new() will have to
set the sz field and doesn't/shouldn't use ctx, as ctx is now passed along
to the callback as a mandatory input argument, following the other APIs in
libbpf that accept callbacks consistently.

Again, this is quite ugly in implementation, but is done in the name of
backwards compatibility and uniform and extensible future APIs (at the
same time, sigh). And it will be gone in libbpf 1.0.

  [0] Closes: https://github.com/libbpf/libbpf/issues/283

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-5-andrii@kernel.org
2021-11-11 16:54:05 -08:00
Andrii Nakryiko 957d350a8b libbpf: Turn btf_dedup_opts into OPTS-based struct
btf__dedup() and struct btf_dedup_opts were added before we figured out
the OPTS mechanism. As such, btf_dedup_opts is non-extensible without
breaking ABI and potentially crashing user applications.

Unfortunately, btf__dedup() and btf_dedup_opts are short and succinct
names that would be great to preserve and use going forward. So we use
___libbpf_override() macro approach, used previously for bpf_prog_load()
API, to define a new btf__dedup() variant that accepts only struct btf *
and struct btf_dedup_opts * arguments, and rename the old btf__dedup()
implementation into btf__dedup_deprecated(). This keeps both source and
binary compatibility with old and new applications.

The biggest problem was struct btf_dedup_opts, which wasn't OPTS-based,
and as such doesn't have `size_t sz;` as a first field. But btf__dedup()
is a pretty rarely used API and I believe that the only currently known
users (besides selftests) are libbpf's own bpf_linker and pahole.
Neither use case actually uses options; both just pass NULL. So instead
of doing extra hacks, just rewrite struct btf_dedup_opts into an OPTS-based
one, move the btf_ext argument into those opts (only bpf_linker needs to
dedup btf_ext, so it's not a typical thing to specify), and drop the never
used `dont_resolve_fwds` option (it was never used anywhere and, AFAIK, it
makes BTF dedup much less useful and efficient).

Just in case, the old implementation, btf__dedup_deprecated(), detects
non-NULL options and errors out with a helpful message, to help migrate
any users still playing with btf__dedup() options.

The last remaining piece is dedup_table_size, which is another
anachronism from very early days of BTF dedup. Since then it has been
reduced to the only valid value, 1, to request forced hash collisions.
This is only used during testing. So instead introduce a bool flag to
force collisions explicitly.
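
A hedged sketch of what a new-style call could look like after this change
(field names follow the description above; treat them as illustrative):

  LIBBPF_OPTS(btf_dedup_opts, opts,
        .btf_ext = btf_ext,          /* only bpf_linker-style users need this */
        .force_collisions = true,    /* testing-only stand-in for dedup_table_size = 1 */
  );

  err = btf__dedup(btf, &opts);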

This patch also adapts selftests to new btf__dedup() and btf_dedup_opts
use to avoid selftests breakage.

  [0] Closes: https://github.com/libbpf/libbpf/issues/281

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-4-andrii@kernel.org
2021-11-11 16:54:05 -08:00
Andrii Nakryiko de29e6bbb9 selftests/bpf: Minor cleanups and normalization of Makefile
A few clean-ups and single-line simplifications. Also split the CLEAN command
into multiple $(RM) invocations as it gets dangerously close to an overly long
argument list. Make sure that -o <output.o> is always used as the last
argument for saner verbose make output.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-3-andrii@kernel.org
2021-11-11 16:54:05 -08:00
Andrii Nakryiko 6501182c08 bpftool: Normalize compile rules to specify output file last
When dealing with verbose Makefile output, it's extremely confusing when
compiler invocation commands don't specify -o <output.o> as the last
argument. Normalize bpftool's Makefile to do just that, as most other
BPF-related Makefiles are already doing that.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-2-andrii@kernel.org
2021-11-11 16:54:05 -08:00
Alexei Starovoitov 3a75111d8a Merge branch 'selftests/bpf: fix test_progs' log_level logic'
Andrii Nakryiko says:

====================

Fix the ability to request verbose (log_level=1) or very verbose (log_level=2)
logs with test_progs's -vv or -vvv parameters. This ability regressed during
recent bpf_prog_load() API refactoring. Also add
bpf_program__set_extra_flags() API to allow setting extra testing flags
(BPF_F_TEST_RND_HI32), which was also dropped during recent changes.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-11 16:44:26 -08:00
Andrii Nakryiko 50dee7078b selftests/bpf: Fix bpf_prog_test_load() logic to pass extra log level
After recent refactoring, bpf_prog_test_load(), used across multiple
selftests, lost the ability to specify extra log_level 1 or 2 (for -vv and
-vvv, respectively). Fix that problem by using the bpf_object__load_xattr()
API, which supports extra log_level flags. Also restore
BPF_F_TEST_RND_HI32 prog_flags by utilizing the new bpf_program__set_extra_flags()
API.
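
Roughly, the fix boils down to something like the following (a sketch; the
extra_prog_load_log_flags variable name is illustrative):

  struct bpf_object_load_attr attr = {
        .obj = obj,
        .log_level = extra_prog_load_log_flags,  /* 1 for -vv, 2 for -vvv */
  };

  err = bpf_object__load_xattr(&attr);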

Fixes: f87c1930ac ("selftests/bpf: Merge test_stub.c into testing_helpers.c")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111051758.92283-3-andrii@kernel.org
2021-11-11 16:44:26 -08:00
Andrii Nakryiko a6ca715831 libbpf: Add ability to get/set per-program load flags
Add bpf_program__flags() API to retrieve prog_flags that will be (or
were) supplied to BPF_PROG_LOAD command.

Also add the bpf_program__set_extra_flags() API to allow setting *extra*
flags, in addition to those determined by the program's SEC() definition.
Such flags are logically OR'ed with the libbpf-derived flags.
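
A usage sketch under the signatures implied above (a getter returning the
flags, a setter OR-ing in extra ones):

  /* e.g. request high 32-bit randomization for testing, on top of
   * whatever flags libbpf derived from the SEC() definition */
  err = bpf_program__set_extra_flags(prog, BPF_F_TEST_RND_HI32);
  if (!err)
        printf("prog_flags: 0x%x\n", bpf_program__flags(prog));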

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111051758.92283-2-andrii@kernel.org
2021-11-11 16:44:26 -08:00
Alexei Starovoitov 04f8cb6d6b Merge branch 'Get ingress_ifindex in BPF_SK_LOOKUP prog type'
Mark Pashmfouroush says:

====================

BPF_SK_LOOKUP users may want to have access to the ifindex of the skb
which triggered the socket lookup. This may be useful for selectively
applying programmable socket lookup logic to packets that arrive on a
specific interface, or excluding packets from an interface.

v3:
- Rename ifindex field to ingress_ifindex for consistency. (Yonghong)

v2:
- Fix inaccurate comment (Alexei)
- Add more details to commit messages (John)
====================

Reviewed-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-10 16:30:17 -08:00
Mark Pashmfouroush 8b4fd2bf1f selftests/bpf: Add tests for accessing ingress_ifindex in bpf_sk_lookup
A new field was added to the bpf_sk_lookup data that users can access.
Add tests that validate that the new ingress_ifindex field contains the
right data.

Signed-off-by: Mark Pashmfouroush <markpash@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211110111016.5670-3-markpash@cloudflare.com
2021-11-10 16:29:59 -08:00
Mark Pashmfouroush f89315650b bpf: Add ingress_ifindex to bpf_sk_lookup
It may be helpful to have access to the ifindex during bpf socket
lookup. An example may be to scope certain socket lookup logic to
specific interfaces, e.g. an interface may be made exempt from custom
lookup code.

Add the ifindex of the arriving connection to the bpf_sk_lookup API.
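
A minimal sketch of how a BPF_SK_LOOKUP program might use the new field (the
ifindex value is illustrative):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("sk_lookup")
  int ifindex_scoped_lookup(struct bpf_sk_lookup *ctx)
  {
        /* exempt everything not arriving on ifindex 2 from custom lookup */
        if (ctx->ingress_ifindex != 2)
                return SK_PASS;   /* fall back to regular socket lookup */

        /* ... selective lookup logic, e.g. bpf_sk_assign(), goes here ... */
        return SK_PASS;
  }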

Signed-off-by: Mark Pashmfouroush <markpash@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211110111016.5670-2-markpash@cloudflare.com
2021-11-10 16:29:58 -08:00
Quentin Monnet 1a8b597dda bpftool: Fix SPDX tag for Makefiles and .gitignore
Bpftool is dual-licensed under GPLv2 and BSD-2-Clause. In commit
907b223651 ("tools: bpftool: dual license all files") we made sure
that all its source files were indeed covered by the two licenses, and
that they had the correct SPDX tags.

However, bpftool's Makefile, the Makefile for its documentation, and the
.gitignore file were skipped at the time (their GPL-2.0-only tag was
added later). Let's update the tags.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Tobias Klauser <tklauser@distanz.ch>
Acked-by: Joe Stringer <joe@cilium.io>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Link: https://lore.kernel.org/bpf/20211105221904.3536-1-quentin@isovalent.com
2021-11-10 09:00:52 -08:00
Kumar Kartikeya Dwivedi 3a74ac2d11 libbpf: Compile using -std=gnu89
The minimum supported C standard version is C89, with use of GNU
extensions, hence make sure to catch any instances that would break
the build for this mode by passing -std=gnu89.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211105234243.390179-4-memxor@gmail.com
2021-11-09 13:27:52 -08:00
Alan Maguire c23551c9c3 selftests/bpf: Add exception handling selftests for tp_bpf program
Exception handling is triggered in BPF tracing programs when a NULL pointer
is dereferenced; the exception handler zeroes the target register and
execution of the BPF program progresses.

To test exception handling, then, we need to trigger a NULL pointer dereference
for a field which should never be zero; if it is, the only explanation is that
the exception handler ran. task->task_works is the NULL pointer chosen (no work
is associated with a new task from fork()), and the task_works->func field should
not be zero if task_works is non-NULL. The test verifies that both task_works and
task_works->func are 0.
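
The core of such a test can be sketched as follows (the attach point and
variable names are illustrative, not the exact selftest):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  void *work, *func;

  SEC("tp_btf/task_newtask")
  int BPF_PROG(on_fork, struct task_struct *task, u64 clone_flags)
  {
        struct callback_head *head = task->task_works;  /* NULL for a new task */

        work = head;
        func = head->func;  /* faulting read; exception handler zeroes it */
        return 0;
  }

  char _license[] SEC("license") = "GPL";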

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1636131046-5982-3-git-send-email-alan.maguire@oracle.com
2021-11-08 22:17:55 +01:00
Russell King b89ddf4cca arm64/bpf: Remove 128MB limit for BPF JIT programs
Commit 91fc957c9b ("arm64/bpf: don't allocate BPF JIT programs in module
memory") restricts BPF JIT program allocation to a 128MB region to ensure
BPF programs are still in branching range of each other. However, this
restriction should not apply to the aarch64 JIT, since BPF_JMP | BPF_CALL
is implemented as a 64-bit move into a register followed by a BLR instruction -
which has the effect of being able to call anything without proximity
limitation.

The practical reason to relax this restriction on JIT memory is that 128MB of
JIT memory can be quickly exhausted, especially where PAGE_SIZE is 64KB - one
page is needed per program. In cases where seccomp filters are applied to
multiple VMs on VM launch - such filters are classic BPF but converted to
eBPF - this can severely limit the number of VMs that can be launched. In a
world where we support BPF JIT always-on, turning off the JIT isn't always an
option either.

Fixes: 91fc957c9b ("arm64/bpf: don't allocate BPF JIT programs in module memory")
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <russell.king@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/1636131046-5982-2-git-send-email-alan.maguire@oracle.com
2021-11-08 22:16:26 +01:00
Alexei Starovoitov 0cc78dcca3 Merge branch 'introduce bpf_find_vma'
Song Liu says:

====================

Changes v4 => v5:
1. Clean up and style change in 2/2. (Andrii)

Changes v3 => v4:
1. Move mmap_unlock_work to task_iter.c to fix the build for .config without
   CONFIG_PERF_EVENTS. (kernel test robot <lkp@intel.com>)

Changes v2 => v3:
1. Avoid using x86 only function in selftests. (Yonghong)
2. Add struct file and struct vm_area_struct to btf_task_struct_ids, and
   use it in bpf_find_vma and stackmap.c. (Yonghong)
3. Fix inaccurate comments. (Yonghong)

Changes v1 => v2:
1. Share irq_work with stackmap.c. (Daniel)
2. Add tests for illegal writes to task/vma from the callback function.
   (Daniel)
3. Other small fixes.

Add helper bpf_find_vma. This can be used in some profiling use cases. It
might also be useful for LSM.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-07 11:54:51 -08:00
Song Liu f108662b27 selftests/bpf: Add tests for bpf_find_vma
Add tests for bpf_find_vma in perf_event program and kprobe program. The
perf_event program is triggered from NMI context, so the second call of
bpf_find_vma() will return -EBUSY (irq_work busy). The kprobe program,
on the other hand, does not have this constraint.

Also add tests for illegal writes to task or vma from the callback
function. The verifier should reject both cases.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20211105232330.1936330-3-songliubraving@fb.com
2021-11-07 11:54:51 -08:00
Song Liu 7c7e3d31e7 bpf: Introduce helper bpf_find_vma
In some profiler use cases, it is necessary to map an address to the
backing file, e.g., a shared library. The bpf_find_vma helper provides a
flexible way to achieve this. bpf_find_vma maps an address of a task to
the vma (vm_area_struct) covering this address, and feeds the vma to a
callback BPF function. The callback function is necessary here, as we need
to ensure mmap_sem is unlocked.

It is necessary to lock mmap_sem for find_vma. To lock and unlock mmap_sem
safely when irqs are disabled, we use the same mechanism as stackmap with
build_id. Specifically, when irqs are disabled, the unlock is postponed
to an irq_work. Refactor stackmap.c so that the irq_work is shared between
bpf_find_vma and the stackmap helpers.
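
A hedged sketch of how a program might use the helper and its callback (the
callback prototype follows the helper description; the attach point, names,
and the some_addr variable are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct cb_ctx { unsigned long ino; };

  __u64 some_addr;  /* address to resolve, set from user space */

  /* invoked with mmap_lock held; task and vma must be treated read-only */
  static long find_vma_cb(struct task_struct *task,
                          struct vm_area_struct *vma, struct cb_ctx *data)
  {
        if (vma->vm_file)
                data->ino = vma->vm_file->f_inode->i_ino;
        return 0;
  }

  SEC("kprobe/do_nanosleep")
  int probe(struct pt_regs *regs)
  {
        struct cb_ctx data = {};

        bpf_find_vma(bpf_get_current_task_btf(), some_addr, find_vma_cb,
                     &data, 0 /* flags must be 0 */);
        return 0;
  }

  char _license[] SEC("license") = "GPL";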

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Hengqi Chen <hengqi.chen@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20211105232330.1936330-2-songliubraving@fb.com
2021-11-07 11:54:51 -08:00
Alexei Starovoitov 5fd79ed9be Merge branch 'Fix leaks in libbpf and selftests'
Andrii Nakryiko says:

====================

Fix all the memory leaks reported by ASAN. All but one are just improper
resource clean-up in selftests. But one memory leak was discovered in libbpf,
leaving the inner map's name leaked.

The first patch fixes the selftests' Makefile by passing SAN_CFLAGS through to
the linker. Without that, compiling with SAN_CFLAGS=-fsanitize=address kept
failing.

Running selftests under ASAN in BPF CI is the next step; we just need to make
sure all the necessary libraries (libasan and liblsan) are installed on the
host and inside the VM. It would be great to get some help with that, but for
now this makes sure that the test_progs run is clean of leak sanitizer errors.

v3->v4:
  - rebase on latest bpf-next;
v2->v3:
  - fix per-cpu array memory leaks in btf_iter.c selftests (Hengqi);
v1->v2:
  - call bpf_map__destroy() conditionally if map->inner_map is present.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-07 09:14:16 -08:00
Andrii Nakryiko 8c7a955201 selftests/bpf: Fix bpf_object leak in skb_ctx selftest
The skb_ctx selftest didn't close the bpf_object implicitly allocated by
the bpf_prog_test_load() helper. Fix the problem by explicitly calling
bpf_object__close() at the end of the test.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-10-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Andrii Nakryiko f91231eeee selftests/bpf: Destroy XDP link correctly
bpf_link__detach() was confused with bpf_link__destroy(), leaving
a leaked FD in the process. Fix the problem.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-9-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Andrii Nakryiko f92321d706 selftests/bpf: Avoid duplicate btf__parse() call
btf__parse() is repeated after successful setup, leaving the first
instance leaked. Remove the redundant and premature call.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-8-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Andrii Nakryiko f79587520a selftests/bpf: Clean up btf and btf_dump in dump_datasec test
Free up used resources at the end and on error. Also make it more
obvious that there is btf__parse() call that creates struct btf
instance.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-7-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Andrii Nakryiko 5309b516bc selftests/bpf: Free inner strings index in btf selftest
Inner array of allocated strings wasn't freed on success. Now it's
always freed.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-6-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Andrii Nakryiko b8b26e585f selftests/bpf: Free per-cpu values array in bpf_iter selftest
Array holding per-cpu values wasn't freed. Fix that.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211107165521.9240-5-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Andrii Nakryiko 8ba2858749 selftests/bpf: Fix memory leaks in btf_type_c_dump() helper
Free up memory and resources used by temporary allocated memstream and
btf_dump instance.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-4-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Andrii Nakryiko 8f7b239ea8 libbpf: Free up resources used by inner map definition
It's not enough to just free(map->inner_map), as inner_map itself can
have extra memory allocated, like map name.

Fixes: 646f02ffdd ("libbpf: Add BTF-defined map-in-map support")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-3-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Andrii Nakryiko 2a2cb45b72 selftests/bpf: Pass sanitizer flags to linker through LDFLAGS
When adding -fsanitize=address to SAN_CFLAGS, it has to be passed both
to compiler through CFLAGS as well as linker through LDFLAGS. Add
SAN_CFLAGS into LDFLAGS to allow building selftests with ASAN.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/20211107165521.9240-2-andrii@kernel.org
2021-11-07 09:14:15 -08:00
Alexei Starovoitov 5577f24cb0 Merge branch 'libbpf: add unified bpf_prog_load() low-level API'
Andrii Nakryiko says:

====================

This patch set adds unified OPTS-based low-level bpf_prog_load() API for
loading BPF programs directly into kernel without utilizing libbpf's
bpf_object abstractions. This OPTS-based interface allows for future
extensions without breaking backwards or forward API and ABI compatibility.
A similar approach will be used for other low-level APIs that require extensive
sets of parameters, like the BPF_MAP_CREATE command.

The first half of the patch set adds the libbpf API and cleans up internal usage
of to-be-deprecated APIs, etc. The second half cleans up and converts selftests
away from using deprecated APIs. See individual patches for more details.

v1->v2:
  - dropped exposing sys_bpf() into public API (Alexei, Daniel);
  - also dropped bpftool/cgroup.c fix for unistd.h include because it's not
    necessary due to sys_bpf() staying as is.

Cc: Hengqi Chen <hengqi.chen@gmail.com>
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2021-11-07 08:34:24 -08:00
Andrii Nakryiko f19ddfe036 selftests/bpf: Use explicit bpf_test_load_program() helper calls
Remove the second part of prog loading testing helper re-definition:

  -Dbpf_load_program=bpf_test_load_program

This completes the clean up of deprecated libbpf program loading APIs.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-13-andrii@kernel.org
2021-11-07 08:34:24 -08:00
Andrii Nakryiko cbdb1461dc selftests/bpf: Use explicit bpf_prog_test_load() calls everywhere
The -Dbpf_prog_load_deprecated=bpf_prog_test_load trick is both ugly and
breaks when deprecation goes into effect, due to macro magic. Convert all
the uses to explicit bpf_prog_test_load() calls, which avoids deprecation
errors and makes everything less magical.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-12-andrii@kernel.org
2021-11-07 08:34:23 -08:00
Andrii Nakryiko f87c1930ac selftests/bpf: Merge test_stub.c into testing_helpers.c
Move testing prog and object load wrappers (bpf_prog_test_load and
bpf_test_load_program) into testing_helpers.{c,h} and get rid of
otherwise useless test_stub.c. Make testing_helpers.c available to
non-test_progs binaries as well.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/bpf/20211103220845.2676888-11-andrii@kernel.org
2021-11-07 08:34:23 -08:00