WSL2-Linux-Kernel/arch/arm64/lib
Will Deacon 9a43563cfd arm64: csum: Fix OoB access in IP checksum code for negative lengths
commit 8bd795fedb upstream.

Although commit c2c24edb1d ("arm64: csum: Fix pathological zero-length
calls") added an early return for zero-length input, syzkaller has
popped up with an example of a _negative_ length which causes an
undefined shift and an out-of-bounds read:

 | BUG: KASAN: slab-out-of-bounds in do_csum+0x44/0x254 arch/arm64/lib/csum.c:39
 | Read of size 4294966928 at addr ffff0000d7ac0170 by task syz-executor412/5975
 |
 | CPU: 0 PID: 5975 Comm: syz-executor412 Not tainted 6.4.0-rc4-syzkaller-g908f31f2a05b #0
 | Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
 | Call trace:
 |  dump_backtrace+0x1b8/0x1e4 arch/arm64/kernel/stacktrace.c:233
 |  show_stack+0x2c/0x44 arch/arm64/kernel/stacktrace.c:240
 |  __dump_stack lib/dump_stack.c:88 [inline]
 |  dump_stack_lvl+0xd0/0x124 lib/dump_stack.c:106
 |  print_address_description mm/kasan/report.c:351 [inline]
 |  print_report+0x174/0x514 mm/kasan/report.c:462
 |  kasan_report+0xd4/0x130 mm/kasan/report.c:572
 |  kasan_check_range+0x264/0x2a4 mm/kasan/generic.c:187
 |  __kasan_check_read+0x20/0x30 mm/kasan/shadow.c:31
 |  do_csum+0x44/0x254 arch/arm64/lib/csum.c:39
 |  csum_partial+0x30/0x58 lib/checksum.c:128
 |  gso_make_checksum include/linux/skbuff.h:4928 [inline]
 |  __udp_gso_segment+0xaf4/0x1bc4 net/ipv4/udp_offload.c:332
 |  udp6_ufo_fragment+0x540/0xca0 net/ipv6/udp_offload.c:47
 |  ipv6_gso_segment+0x5cc/0x1760 net/ipv6/ip6_offload.c:119
 |  skb_mac_gso_segment+0x2b4/0x5b0 net/core/gro.c:141
 |  __skb_gso_segment+0x250/0x3d0 net/core/dev.c:3401
 |  skb_gso_segment include/linux/netdevice.h:4859 [inline]
 |  validate_xmit_skb+0x364/0xdbc net/core/dev.c:3659
 |  validate_xmit_skb_list+0x94/0x130 net/core/dev.c:3709
 |  sch_direct_xmit+0xe8/0x548 net/sched/sch_generic.c:327
 |  __dev_xmit_skb net/core/dev.c:3805 [inline]
 |  __dev_queue_xmit+0x147c/0x3318 net/core/dev.c:4210
 |  dev_queue_xmit include/linux/netdevice.h:3085 [inline]
 |  packet_xmit+0x6c/0x318 net/packet/af_packet.c:276
 |  packet_snd net/packet/af_packet.c:3081 [inline]
 |  packet_sendmsg+0x376c/0x4c98 net/packet/af_packet.c:3113
 |  sock_sendmsg_nosec net/socket.c:724 [inline]
 |  sock_sendmsg net/socket.c:747 [inline]
 |  __sys_sendto+0x3b4/0x538 net/socket.c:2144

Extend the early return to reject negative lengths as well, aligning our
implementation with the generic code in lib/checksum.c.

Cc: Robin Murphy <robin.murphy@arm.com>
Fixes: 5777eaed56 ("arm64: Implement optimised checksum routine")
Reported-by: syzbot+4a9f9820bd8d302e22f7@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/000000000000e0e94c0603f8d213@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-09-19 12:22:51 +02:00
Makefile arch: remove compat_alloc_user_space 2021-09-08 15:32:35 -07:00
clear_page.S arm64: clear_page() shouldn't use DC ZVA when DCZID_EL0.DZP == 1 2022-01-27 11:03:28 +01:00
clear_user.S arm64: Rewrite __arch_clear_user() 2021-06-01 18:34:38 +01:00
copy_from_user.S arm64: Avoid premature usercopy failure 2021-07-15 17:29:14 +01:00
copy_page.S arm64: lib: Annotate {clear,copy}_page() as position-independent 2021-03-19 12:01:19 +00:00
copy_template.S treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 2019-06-19 17:09:07 +02:00
copy_to_user.S arm64: Avoid premature usercopy failure 2021-07-15 17:29:14 +01:00
crc32.S arm64: lib: Consistently enable crc32 extension 2020-04-28 14:36:32 +01:00
csum.c arm64: csum: Fix OoB access in IP checksum code for negative lengths 2023-09-19 12:22:51 +02:00
delay.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 2019-06-19 17:09:07 +02:00
error-inject.c arm64: Add support for function error injection 2019-08-07 13:53:09 +01:00
insn.c arm64: use __func__ to get function name in pr_err 2021-07-30 16:26:16 +01:00
kasan_sw_tags.S kasan: arm64: support specialized outlined tag mismatch checks 2021-05-26 23:31:26 +01:00
memchr.S arm64: Better optimised memchr() 2021-06-01 18:34:38 +01:00
memcmp.S arm64: update string routine copyrights and URLs 2021-06-02 17:58:26 +01:00
memcpy.S arm64: update string routine copyrights and URLs 2021-06-02 17:58:26 +01:00
memset.S arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S 2020-10-30 08:32:31 +00:00
mte.S arm64: mte: DC {GVA,GZVA} shouldn't be used when DCZID_EL0.DZP == 1 2022-01-27 11:03:28 +01:00
strchr.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
strcmp.S arm64: Mitigate MTE issues with str{n}cmp() 2021-09-21 14:50:19 +01:00
strlen.S arm64: fix strlen() with CONFIG_KASAN_HW_TAGS 2021-07-12 13:36:22 +01:00
strncmp.S arm64: lib: Import latest version of Arm Optimized Routines' strncmp 2023-09-19 12:22:30 +02:00
strnlen.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
strrchr.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
tishift.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
uaccess_flushcache.c arm64: Rename arm64-internal cache maintenance functions 2021-05-25 19:27:49 +01:00
xor-neon.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 2019-06-19 17:09:55 +02:00