WSL2-Linux-Kernel/kernel/bpf
Kumar Kartikeya Dwivedi cf353904a8 bpf: Detect IP == ksym.end as part of BPF program
[ Upstream commit 66d9111f3517f85ef2af0337ece02683ce0faf21 ]

Now that the bpf_throw kfunc is the first call instruction that has
noreturn semantics within the verifier, it also triggers dead code
elimination in ways not seen before. For one, any instruction following
a bpf_throw call will never be marked as seen. Moreover, if a call
chain ends up throwing, any instructions in the callers that follow the
call to the eventually throwing subprog will also never be marked as
seen.

The tempting way to fix this would be to emit extra 'int3' instructions
that bump the jited_len of a program, ensuring that at runtime, when a
program throws, we can discover its boundaries even if the call
instruction to bpf_throw (or to subprogs that always throw) is emitted
as the final instruction in the program.

An example of such a program would be this:

do_something():
	...
	r0 = 0
	exit

foo():
	r1 = 0
	call bpf_throw
	r0 = 0
	exit

bar(cond):
	if r1 != 0 goto pc+2
	call do_something
	exit
	call foo
	r0 = 0  // Never seen by verifier
	exit	// (also never seen)

main(ctx):
	r1 = ...
	call bar
	r0 = 0
	exit

Here, if we do end up throwing, the stacktrace would be the following:

bpf_throw
foo
bar
main

In bar, the final instruction emitted will be the call to foo; as such,
the return address will point to the subsequent instruction (which the
JIT emits as an int3 on x86). That address lies outside the jited_len
of the program, so when unwinding we fail to recognize the return
address as belonging to any program and end up in a panic, since
unreliable stack unwinding of BPF programs is something we never
expect.
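
To make the failure concrete, here is a minimal, self-contained sketch
(not taken from the patch; the addresses and helper name are made up)
of the strict range check that effectively rejects such a return
address before this change:

#include <assert.h>
#include <stdbool.h>

/* Hypothetical ksym range for a JITed (sub)program: [start, end), with
 * end = start + jited_len. All numbers here are made up for illustration.
 */
struct ksym_range {
	unsigned long start;
	unsigned long end;
};

/* Pre-fix behaviour: an address only counts as part of the program if
 * it lies strictly below the end of the JITed image.
 */
static bool ip_in_prog_before_fix(const struct ksym_range *ks, unsigned long ip)
{
	return ip >= ks->start && ip < ks->end;
}

int main(void)
{
	/* Say bar's JITed image spans 0x1000..0x1080 (jited_len = 0x80). */
	struct ksym_range bar = { .start = 0x1000, .end = 0x1080 };

	/* If 'call foo' is bar's final instruction, the pushed return
	 * address is the byte right after it, i.e. exactly bar.end.
	 */
	unsigned long ret_addr = bar.end;

	/* The strict bound rejects it, so the unwinder cannot attribute
	 * this frame to any BPF program.
	 */
	assert(!ip_in_prog_before_fix(&bar, ret_addr));
	return 0;
}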

To remedy this, make bpf_prog_ksym_find treat IP == ksym.end as part of
the BPF program, so that is_bpf_text_address returns true in this case
and we are able to unwind reliably when the final instruction ends up
being a call instruction.
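
For illustration, a hedged sketch of the relaxed boundary condition
follows; the actual change lives in the ksym address lookup in
kernel/bpf/core.c, and the struct and function names below are invented
for a standalone example:

#include <assert.h>

/* Sketch of the relaxed check used when resolving an address to a BPF
 * program's ksym: returning 0 means "addr belongs to this program".
 * The real lookup walks a latch tree in kernel/bpf/core.c; only the
 * boundary condition matters here.
 */
struct ksym_range {
	unsigned long start;
	unsigned long end;	/* start + jited_len */
};

static int ksym_range_cmp(unsigned long addr, const struct ksym_range *ks)
{
	if (addr < ks->start)
		return -1;
	/* '>' rather than '>=': an address equal to ks->end (the return
	 * address after a final call instruction) still matches.
	 */
	if (addr > ks->end)
		return 1;
	return 0;
}

int main(void)
{
	struct ksym_range bar = { .start = 0x1000, .end = 0x1080 };

	assert(ksym_range_cmp(0x1080, &bar) == 0);	/* IP == ksym.end: inside */
	assert(ksym_range_cmp(0x1081, &bar) > 0);	/* past the image: outside */
	return 0;
}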

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20230912233214.1518551-12-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-11-28 16:56:15 +00:00
preload libbpf: Move BPF_SEQ_PRINTF and BPF_SNPRINTF to bpf_helpers.h 2021-05-26 10:45:41 -07:00
Kconfig sock_map: Relax config dependency to CONFIG_NET 2021-07-15 18:17:49 -07:00
Makefile bpf: Enable task local storage for tracing programs 2021-02-26 11:51:47 -08:00
arraymap.c bpf: Acquire map uref in .init_seq_private for array map iterator 2022-08-25 11:40:03 +02:00
bpf_inode_storage.c bpf: Fix spelling mistakes 2021-05-24 21:13:05 -07:00
bpf_iter.c bpf: Refactor BPF_PROG_RUN into a function 2021-08-17 00:45:07 +02:00
bpf_local_storage.c bpf: Annotate data races in bpf_local_storage 2023-05-24 17:36:44 +01:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-02-10 15:54:26 -08:00
bpf_lru_list.h bpf: Fix a typo "inacitve" -> "inactive" 2020-04-06 21:54:10 +02:00
bpf_lsm.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2021-06-17 11:54:56 -07:00
bpf_struct_ops.c bpf: Handle return value of BPF_PROG_TYPE_STRUCT_OPS prog 2021-09-14 11:09:50 -07:00
bpf_struct_ops_types.h bpf: tcp: Support tcp_congestion_ops in bpf 2020-01-09 08:46:18 -08:00
bpf_task_storage.c bpf: Use this_cpu_{inc|dec|inc_return} for bpf_task_storage_busy 2022-10-26 12:34:41 +02:00
btf.c bpf/btf: Accept function names that contain dots 2023-06-28 10:29:49 +02:00
cgroup.c bpf: Don't EFAULT for {g,s}setsockopt with wrong optlen 2023-07-23 13:46:49 +02:00
core.c bpf: Detect IP == ksym.end as part of BPF program 2023-11-28 16:56:15 +00:00
cpumap.c bpf, cpumap: Make sure kthread is running before map update returns 2023-08-11 15:13:58 +02:00
devmap.c bpf, devmap: Exclude XDP broadcast to master device 2021-08-09 23:25:14 +02:00
disasm.c bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
disasm.h bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
dispatcher.c bpf: Remove bpf_image tree 2020-03-13 12:49:52 -07:00
hashtab.c bpf: fix a memory leak in the LRU and LRU_PERCPU hash maps 2023-06-05 09:21:14 +02:00
helpers.c bpf: Check map->usercnt after timer->timer is assigned 2023-11-20 11:08:28 +01:00
inode.c bpf: Fix mount source show for bpffs 2022-01-27 11:05:26 +01:00
local_storage.c bpf: Increase supported cgroup storage value size 2021-07-27 15:59:29 -07:00
lpm_trie.c bpf: Allow RCU-protected lookups to happen from bh context 2021-06-24 19:41:15 +02:00
map_in_map.c bpf: Remember BTF of inner maps. 2021-07-15 22:31:10 +02:00
map_in_map.h bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
map_iter.c bpf: Introduce MEM_RDONLY flag 2022-05-01 17:22:24 +02:00
net_namespace.c bpf: Add support for forced LINK_DETACH command 2020-08-01 20:38:28 -07:00
offload.c bpf: restore the ebpf program ID for BPF_AUDIT_UNLOAD and PERF_BPF_EVENT_PROG_UNLOAD 2023-01-24 07:22:46 +01:00
percpu_freelist.c bpf: Initialize same number of free nodes for each pcpu_freelist 2022-11-26 09:24:38 +01:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c bpf: Refactor bpf_iter_reg to have separate seq_info member 2020-07-25 20:16:32 -07:00
queue_stack_maps.c bpf: Avoid deadlock when using queue and stack maps from NMI 2023-10-06 13:18:04 +02:00
reuseport_array.c bpf: Fix spelling mistakes 2021-05-24 21:13:05 -07:00
ringbuf.c bpf: Add MEM_RDONLY for helper args that are pointers to rdonly mem. 2022-05-01 17:22:26 +02:00
stackmap.c bpf: Fix excessive memory allocation in stack_map_alloc() 2022-06-06 08:43:42 +02:00
syscall.c bpf: restore the ebpf program ID for BPF_AUDIT_UNLOAD and PERF_BPF_EVENT_PROG_UNLOAD 2023-01-24 07:22:46 +01:00
sysfs_btf.c bpf: Load and verify kernel module BTFs 2020-11-10 15:25:53 -08:00
task_iter.c bpf: Consolidate task_struct BTF_ID declarations 2021-08-25 10:37:05 -07:00
tnum.c bpf, tnums: Provably sound, faster, and more precise algorithm for tnum_mul 2021-06-01 13:34:15 +02:00
trampoline.c bpf: Fix potential array overflow in bpf_trampoline_get_progs() 2022-06-06 08:43:42 +02:00
verifier.c bpf: Fix verifier log for async callback return values 2023-10-19 23:05:34 +02:00