* [syzbot] [kvm?] [net?] [virt?] WARNING in virtio_transport_send_pkt_info (2)

From: syzbot @ 2025-11-28  1:44 UTC
To: davem, edumazet, eperezma, horms, jasowang, kuba, kvm, linux-kernel,
    mst, netdev, pabeni, sgarzare, stefanha, syzkaller-bugs,
    virtualization, xuanzhuo

Hello,

syzbot found the following issue on:

HEAD commit:    d724c6f85e80 Add linux-next specific files for 20251121
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=12920f42580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=763fb984aa266726
dashboard link: https://syzkaller.appspot.com/bug?extid=28e5f3d207b14bae122a
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1458797c580000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=15afd612580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/b2f349c65e3c/disk-d724c6f8.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/aba40ae987ce/vmlinux-d724c6f8.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0b98fbfe576f/bzImage-d724c6f8.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+28e5f3d207b14bae122a@syzkaller.appspotmail.com

------------[ cut here ]------------
'send_pkt()' returns 0, but 4096 expected
WARNING: net/vmw_vsock/virtio_transport_common.c:430 at virtio_transport_send_pkt_info+0xd1e/0xef0 net/vmw_vsock/virtio_transport_common.c:428, CPU#1: syz.0.17/5986
Modules linked in:
CPU: 1 UID: 0 PID: 5986 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:virtio_transport_send_pkt_info+0xd1e/0xef0 net/vmw_vsock/virtio_transport_common.c:428
Code: f6 90 0f 0b 90 e9 d7 f7 ff ff e8 5d cc 7c f6 c6 05 c6 5f 64 04 01 90 48 c7 c7 60 da b4 8c 44 89 f6 48 89 ea e8 13 eb 3e f6 90 <0f> 0b 90 90 eb 9e 89 d9 80 e1 07 80 c1 03 38 c1 0f 8c 0a f3 ff ff
RSP: 0018:ffffc900033a7508 EFLAGS: 00010246
RAX: 2383e4149a9d5400 RBX: 0000000000001000 RCX: ffff88807b361e80
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000002
RBP: 0000000000001000 R08: 0000000000000003 R09: 0000000000000004
R10: dffffc0000000000 R11: fffffbfff1c3a720 R12: 0000000000040000
R13: dffffc0000000000 R14: 0000000000000000 R15: ffffc900033a7640
FS:  00005555677a5500(0000) GS:ffff888125b6f000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000030000 CR3: 0000000075f06000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 virtio_transport_stream_enqueue net/vmw_vsock/virtio_transport_common.c:1113 [inline]
 virtio_transport_seqpacket_enqueue+0x143/0x1c0 net/vmw_vsock/virtio_transport_common.c:841
 vsock_connectible_sendmsg+0xabf/0x1040 net/vmw_vsock/af_vsock.c:2158
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:746
 ____sys_sendmsg+0x52d/0x870 net/socket.c:2634
 ___sys_sendmsg+0x21f/0x2a0 net/socket.c:2688
 __sys_sendmmsg+0x227/0x430 net/socket.c:2777
 __do_sys_sendmmsg net/socket.c:2804 [inline]
 __se_sys_sendmmsg net/socket.c:2801 [inline]
 __x64_sys_sendmmsg+0xa0/0xc0 net/socket.c:2801
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1e5218f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fffbe398ef8 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 00007f1e523e5fa0 RCX: 00007f1e5218f749
RDX: 0000000000000001 RSI: 0000200000000100 RDI: 0000000000000004
RBP: 00007f1e52213f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000024008094 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f1e523e5fa0 R14: 00007f1e523e5fa0 R15: 0000000000000004
 </TASK>

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup
* [PATCH Next] net: restore the iterator to its original state when an error occurs

From: Edward Adam Davis @ 2025-11-28  8:41 UTC
To: syzbot+28e5f3d207b14bae122a
Cc: davem, edumazet, eperezma, horms, jasowang, kuba, kvm, linux-kernel,
    mst, netdev, pabeni, sgarzare, stefanha, syzkaller-bugs,
    virtualization, xuanzhuo

In zerocopy_fill_skb_from_iter(), if two copy operations are performed
and the first one succeeds while the second one fails, it returns a
failure, but the count in the iterator has already been decremented by
the first successful copy. This ultimately affects the local variable
rest_len in virtio_transport_send_pkt_info(), leaving rest_len greater
than the actual iterator count. As a result, packet sending continues
even when the iterator count is zero, which in turn leads to skb->len
being 0 and triggers the warning reported by syzbot [1].

Therefore, if the zerocopy operation fails, revert the iterator to its
original state.

[1]
'send_pkt()' returns 0, but 4096 expected
WARNING: net/vmw_vsock/virtio_transport_common.c:430 at virtio_transport_send_pkt_info+0xd1e/0xef0 net/vmw_vsock/virtio_transport_common.c:428, CPU#1: syz.0.17/5986
Call Trace:
 virtio_transport_stream_enqueue net/vmw_vsock/virtio_transport_common.c:1113 [inline]
 virtio_transport_seqpacket_enqueue+0x143/0x1c0 net/vmw_vsock/virtio_transport_common.c:841
 vsock_connectible_sendmsg+0xabf/0x1040 net/vmw_vsock/af_vsock.c:2158
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:746

Reported-by: syzbot+28e5f3d207b14bae122a@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=28e5f3d207b14bae122a
Tested-by: syzbot+28e5f3d207b14bae122a@syzkaller.appspotmail.com
Signed-off-by: Edward Adam Davis <eadavis@qq.com>
---
 net/core/datagram.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/net/core/datagram.c b/net/core/datagram.c
index c285c6465923..da10465cd8a4 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -748,10 +748,13 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 			    size_t length,
 			    struct net_devmem_dmabuf_binding *binding)
 {
+	struct iov_iter_state state;
 	unsigned long orig_size = skb->truesize;
 	unsigned long truesize;
 	int ret;
 
+	iov_iter_save_state(from, &state);
+
 	if (msg && msg->msg_ubuf && msg->sg_from_iter)
 		ret = msg->sg_from_iter(skb, from, length);
 	else if (binding)
@@ -759,6 +762,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 	else
 		ret = zerocopy_fill_skb_from_iter(skb, from, length);
 
+	if (ret)
+		iov_iter_restore(from, &state);
+
 	truesize = skb->truesize - orig_size;
 	if (sk && sk->sk_type == SOCK_STREAM) {
 		sk_wmem_queued_add(sk, truesize);
-- 
2.43.0
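[As background for the diff above: iov_iter_save_state()/iov_iter_restore()
snapshot and reinstate the iterator's count and offsets. The sketch below is
illustrative only and not part of the patch; example_fill_two_chunks() and
EXAMPLE_CHUNK are hypothetical names, while the iov_iter helpers are the real
kernel APIs. It shows the failure mode the commit message describes and how
the save/restore pair repairs it:]

	#include <linux/errno.h>
	#include <linux/uio.h>

	#define EXAMPLE_CHUNK 4096

	static int example_fill_two_chunks(void *dst, struct iov_iter *from)
	{
		struct iov_iter_state state;

		iov_iter_save_state(from, &state);	/* snapshot count/offset */

		if (copy_from_iter(dst, EXAMPLE_CHUNK, from) != EXAMPLE_CHUNK)
			goto revert;
		/* If the second copy faults, the first EXAMPLE_CHUNK bytes
		 * have already been consumed from the iterator...
		 */
		if (copy_from_iter(dst + EXAMPLE_CHUNK, EXAMPLE_CHUNK,
				   from) != EXAMPLE_CHUNK)
			goto revert;

		return 0;

	revert:
		/* ...so without this restore, iov_iter_count(from) disagrees
		 * with what a caller such as virtio_transport_send_pkt_info()
		 * still believes is pending (rest_len), and it keeps queueing
		 * empty packets.
		 */
		iov_iter_restore(from, &state);
		return -EFAULT;
	}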
* [syzbot ci] Re: net: restore the iterator to its original state when an error occurs

From: syzbot ci @ 2025-11-28 13:05 UTC
To: davem, eadavis, edumazet, eperezma, horms, jasowang, kuba, kvm,
    linux-kernel, mst, netdev, pabeni, sgarzare, stefanha, syzbot,
    syzkaller-bugs, virtualization, xuanzhuo
Cc: syzbot, syzkaller-bugs

syzbot ci has tested the following series

[v1] net: restore the iterator to its original state when an error occurs
https://lore.kernel.org/all/tencent_387517772566B03DBD365896C036264AA809@qq.com
* [PATCH Next] net: restore the iterator to its original state when an error occurs

and found the following issues:
* KASAN: slab-out-of-bounds Read in iov_iter_revert
* KASAN: stack-out-of-bounds Read in iov_iter_revert

Full report is available here:
https://ci.syzbot.org/series/b5c506f4-f657-428b-bd21-8d50aedef42c

***

KASAN: slab-out-of-bounds Read in iov_iter_revert

tree:      net-next
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
base:      db4029859d6fd03f0622d394f4cdb1be86d7ec62
arch:      amd64
compiler:  Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config:    https://ci.syzbot.org/builds/253e310d-d693-4611-8760-36e2b39c0752/config
syz repro: https://ci.syzbot.org/findings/1bbe297c-62ec-4071-9df3-d1c80a2bb758/syz_repro

==================================================================
BUG: KASAN: slab-out-of-bounds in iov_iter_revert+0x4d5/0x5f0 lib/iov_iter.c:645
Read of size 8 at addr ffff888112061ff8 by task syz.1.18/5997

CPU: 0 UID: 0 PID: 5997 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x240 mm/kasan/report.c:482
 kasan_report+0x118/0x150 mm/kasan/report.c:595
 iov_iter_revert+0x4d5/0x5f0 lib/iov_iter.c:645
 skb_zerocopy_iter_stream+0x27d/0x660 net/core/skbuff.c:1911
 tcp_sendmsg_locked+0x1815/0x5540 net/ipv4/tcp.c:1300
 tcp_sendmsg+0x2f/0x50 net/ipv4/tcp.c:1412
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg+0x19c/0x270 net/socket.c:742
 ____sys_sendmsg+0x52d/0x830 net/socket.c:2630
 ___sys_sendmsg+0x21f/0x2a0 net/socket.c:2684
 __sys_sendmmsg+0x227/0x430 net/socket.c:2773
 __do_sys_sendmmsg net/socket.c:2800 [inline]
 __se_sys_sendmmsg net/socket.c:2797 [inline]
 __x64_sys_sendmmsg+0xa0/0xc0 net/socket.c:2797
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6942f8f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f6943d8a038 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 00007f69431e5fa0 RCX: 00007f6942f8f749
RDX: 0000000000000004 RSI: 0000200000000d00 RDI: 0000000000000003
RBP: 00007f6943013f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000004000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f69431e6038 R14: 00007f69431e5fa0 R15: 00007fff3f790f38
 </TASK>

Allocated by task 5913:
 kasan_save_stack mm/kasan/common.c:56 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
 poison_kmalloc_redzone mm/kasan/common.c:400 [inline]
 __kasan_kmalloc+0x93/0xb0 mm/kasan/common.c:417
 kasan_kmalloc include/linux/kasan.h:262 [inline]
 __do_kmalloc_node mm/slub.c:5650 [inline]
 __kmalloc_noprof+0x411/0x7f0 mm/slub.c:5662
 kmalloc_noprof include/linux/slab.h:961 [inline]
 kzalloc_noprof include/linux/slab.h:1094 [inline]
 ip6t_alloc_initial_table+0x6b/0x6d0 net/ipv6/netfilter/ip6_tables.c:40
 ip6table_security_table_init+0x1b/0x70 net/ipv6/netfilter/ip6table_security.c:42
 xt_find_table_lock+0x30c/0x3e0 net/netfilter/x_tables.c:1260
 xt_request_find_table_lock+0x26/0x100 net/netfilter/x_tables.c:1285
 get_info net/ipv6/netfilter/ip6_tables.c:979 [inline]
 do_ip6t_get_ctl+0x730/0x1180 net/ipv6/netfilter/ip6_tables.c:1668
 nf_getsockopt+0x26e/0x290 net/netfilter/nf_sockopt.c:116
 ipv6_getsockopt+0x1ed/0x290 net/ipv6/ipv6_sockglue.c:1473
 do_sock_getsockopt+0x372/0x450 net/socket.c:2421
 __sys_getsockopt net/socket.c:2450 [inline]
 __do_sys_getsockopt net/socket.c:2457 [inline]
 __se_sys_getsockopt net/socket.c:2454 [inline]
 __x64_sys_getsockopt+0x1a5/0x250 net/socket.c:2454
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 5913:
 kasan_save_stack mm/kasan/common.c:56 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
 __kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:587
 kasan_save_free_info mm/kasan/kasan.h:406 [inline]
 poison_slab_object mm/kasan/common.c:252 [inline]
 __kasan_slab_free+0x5c/0x80 mm/kasan/common.c:284
 kasan_slab_free include/linux/kasan.h:234 [inline]
 slab_free_hook mm/slub.c:2543 [inline]
 slab_free mm/slub.c:6642 [inline]
 kfree+0x19a/0x6d0 mm/slub.c:6849
 ip6table_security_table_init+0x4b/0x70 net/ipv6/netfilter/ip6table_security.c:46
 xt_find_table_lock+0x30c/0x3e0 net/netfilter/x_tables.c:1260
 xt_request_find_table_lock+0x26/0x100 net/netfilter/x_tables.c:1285
 get_info net/ipv6/netfilter/ip6_tables.c:979 [inline]
 do_ip6t_get_ctl+0x730/0x1180 net/ipv6/netfilter/ip6_tables.c:1668
 nf_getsockopt+0x26e/0x290 net/netfilter/nf_sockopt.c:116
 ipv6_getsockopt+0x1ed/0x290 net/ipv6/ipv6_sockglue.c:1473
 do_sock_getsockopt+0x372/0x450 net/socket.c:2421
 __sys_getsockopt net/socket.c:2450 [inline]
 __do_sys_getsockopt net/socket.c:2457 [inline]
 __se_sys_getsockopt net/socket.c:2454 [inline]
 __x64_sys_getsockopt+0x1a5/0x250 net/socket.c:2454
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff888112061800
 which belongs to the cache kmalloc-1k of size 1024
The buggy address is located 1016 bytes to the right of
 allocated 1024-byte region [ffff888112061800, ffff888112061c00)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x112060
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x17ff00000000040(head|node=0|zone=2|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 017ff00000000040 ffff888100041dc0 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000100010 00000000f5000000 0000000000000000
head: 017ff00000000040 ffff888100041dc0 dead000000000122 0000000000000000
head: 0000000000000000 0000000000100010 00000000f5000000 0000000000000000
head: 017ff00000000003 ffffea0004481801 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5913, tgid 5913 (syz-executor), ts 67648433769, free_ts 67644331621
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x234/0x290 mm/page_alloc.c:1845
 prep_new_page mm/page_alloc.c:1853 [inline]
 get_page_from_freelist+0x2365/0x2440 mm/page_alloc.c:3879
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5178
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
 alloc_slab_page mm/slub.c:3059 [inline]
 allocate_slab+0x96/0x350 mm/slub.c:3232
 new_slab mm/slub.c:3286 [inline]
 ___slab_alloc+0xf56/0x1990 mm/slub.c:4655
 __slab_alloc+0x65/0x100 mm/slub.c:4778
 __slab_alloc_node mm/slub.c:4854 [inline]
 slab_alloc_node mm/slub.c:5276 [inline]
 __do_kmalloc_node mm/slub.c:5649 [inline]
 __kmalloc_noprof+0x471/0x7f0 mm/slub.c:5662
 kmalloc_noprof include/linux/slab.h:961 [inline]
 kzalloc_noprof include/linux/slab.h:1094 [inline]
 ipt_alloc_initial_table+0x6b/0x6a0 net/ipv4/netfilter/ip_tables.c:36
 iptable_security_table_init+0x1b/0x70 net/ipv4/netfilter/iptable_security.c:43
 xt_find_table_lock+0x30c/0x3e0 net/netfilter/x_tables.c:1260
 xt_request_find_table_lock+0x26/0x100 net/netfilter/x_tables.c:1285
 get_info net/ipv4/netfilter/ip_tables.c:963 [inline]
 do_ipt_get_ctl+0x730/0x1180 net/ipv4/netfilter/ip_tables.c:1659
 nf_getsockopt+0x26e/0x290 net/netfilter/nf_sockopt.c:116
 ip_getsockopt+0x1c4/0x220 net/ipv4/ip_sockglue.c:1777
 do_sock_getsockopt+0x372/0x450 net/socket.c:2421
page last free pid 5913 tgid 5913 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1394 [inline]
 __free_frozen_pages+0xbc4/0xd30 mm/page_alloc.c:2901
 __slab_free+0x2e7/0x390 mm/slub.c:5970
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x97/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x148/0x160 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:352
 kasan_slab_alloc include/linux/kasan.h:252 [inline]
 slab_post_alloc_hook mm/slub.c:4978 [inline]
 slab_alloc_node mm/slub.c:5288 [inline]
 kmem_cache_alloc_noprof+0x367/0x6e0 mm/slub.c:5295
 getname_flags+0xb8/0x540 fs/namei.c:146
 getname include/linux/fs.h:2924 [inline]
 do_sys_openat2+0xbc/0x1c0 fs/open.c:1431
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff888112061e80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff888112061f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff888112061f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
                                                                ^
 ffff888112062000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff888112062080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================

***

KASAN: stack-out-of-bounds Read in iov_iter_revert

tree:      net-next
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
base:      db4029859d6fd03f0622d394f4cdb1be86d7ec62
arch:      amd64
compiler:  Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config:    https://ci.syzbot.org/builds/253e310d-d693-4611-8760-36e2b39c0752/config
C repro:   https://ci.syzbot.org/findings/be09fb4c-b087-441e-a7d7-eb8da4f7a000/c_repro
syz repro: https://ci.syzbot.org/findings/be09fb4c-b087-441e-a7d7-eb8da4f7a000/syz_repro

==================================================================
BUG: KASAN: stack-out-of-bounds in iov_iter_revert+0x4d5/0x5f0 lib/iov_iter.c:645
Read of size 8 at addr ffffc90003847b58 by task syz.0.17/5946

CPU: 0 UID: 0 PID: 5946 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x240 mm/kasan/report.c:482
 kasan_report+0x118/0x150 mm/kasan/report.c:595
 iov_iter_revert+0x4d5/0x5f0 lib/iov_iter.c:645
 skb_zerocopy_iter_stream+0x27d/0x660 net/core/skbuff.c:1911
 tcp_sendmsg_locked+0x1815/0x5540 net/ipv4/tcp.c:1300
 tcp_sendmsg+0x2f/0x50 net/ipv4/tcp.c:1412
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg+0x19c/0x270 net/socket.c:742
 ____sys_sendmsg+0x52d/0x830 net/socket.c:2630
 ___sys_sendmsg+0x21f/0x2a0 net/socket.c:2684
 __sys_sendmmsg+0x227/0x430 net/socket.c:2773
 __do_sys_sendmmsg net/socket.c:2800 [inline]
 __se_sys_sendmmsg net/socket.c:2797 [inline]
 __x64_sys_sendmmsg+0xa0/0xc0 net/socket.c:2797
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe7c078f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe7c1647038 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 00007fe7c09e5fa0 RCX: 00007fe7c078f749
RDX: 0000000000000004 RSI: 0000200000000d00 RDI: 0000000000000003
RBP: 00007fe7c0813f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000004000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fe7c09e6038 R14: 00007fe7c09e5fa0 R15: 00007ffd19e95ea8
 </TASK>

The buggy address belongs to stack of task syz.0.17/5946
 and is located at offset 280 in frame:
 ___sys_sendmsg+0x0/0x2a0 net/socket.c:2713

This frame has 4 objects:
 [32, 88) 'msg.i.i'
 [128, 256) 'address'
 [288, 416) 'iovstack'
 [448, 456) 'iov'

The buggy address belongs to a 8-page vmalloc region starting at 0xffffc90003840000
 allocated at copy_process+0x54b/0x3c00 kernel/fork.c:2012
The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1135fc
memcg:ffff88810c5ca102
flags: 0x17ff00000000000(node=0|zone=2|lastcpupid=0x7ff)
raw: 017ff00000000000 0000000000000000 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000001ffffffff ffff88810c5ca102
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x2dc2(GFP_KERNEL|__GFP_HIGHMEM|__GFP_ZERO|__GFP_NOWARN), pid 5869, tgid 5869 (syz-executor), ts 56973482428, free_ts 56803199363
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x234/0x290 mm/page_alloc.c:1845
 prep_new_page mm/page_alloc.c:1853 [inline]
 get_page_from_freelist+0x2365/0x2440 mm/page_alloc.c:3879
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5178
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
 alloc_frozen_pages_noprof mm/mempolicy.c:2487 [inline]
 alloc_pages_noprof+0xa9/0x190 mm/mempolicy.c:2507
 vm_area_alloc_pages mm/vmalloc.c:3647 [inline]
 __vmalloc_area_node mm/vmalloc.c:3724 [inline]
 __vmalloc_node_range_noprof+0x96c/0x12d0 mm/vmalloc.c:3897
 __vmalloc_node_noprof+0xc2/0x110 mm/vmalloc.c:3960
 alloc_thread_stack_node kernel/fork.c:311 [inline]
 dup_task_struct+0x3d4/0x830 kernel/fork.c:881
 copy_process+0x54b/0x3c00 kernel/fork.c:2012
 kernel_clone+0x21e/0x840 kernel/fork.c:2609
 __do_sys_clone kernel/fork.c:2750 [inline]
 __se_sys_clone kernel/fork.c:2734 [inline]
 __x64_sys_clone+0x18b/0x1e0 kernel/fork.c:2734
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
page last free pid 5845 tgid 5845 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1394 [inline]
 __free_frozen_pages+0xbc4/0xd30 mm/page_alloc.c:2901
 kasan_depopulate_vmalloc_pte+0x6d/0x90 mm/kasan/shadow.c:495
 apply_to_pte_range mm/memory.c:3143 [inline]
 apply_to_pmd_range mm/memory.c:3187 [inline]
 apply_to_pud_range mm/memory.c:3223 [inline]
 apply_to_p4d_range mm/memory.c:3259 [inline]
 __apply_to_page_range+0xb66/0x13d0 mm/memory.c:3295
 kasan_release_vmalloc+0xa2/0xd0 mm/kasan/shadow.c:616
 kasan_release_vmalloc_node mm/vmalloc.c:2255 [inline]
 purge_vmap_node+0x214/0x8f0 mm/vmalloc.c:2272
 __purge_vmap_area_lazy+0x7a4/0xb40 mm/vmalloc.c:2362
 drain_vmap_area_work+0x27/0x40 mm/vmalloc.c:2396
 process_one_work kernel/workqueue.c:3263 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3427
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Memory state around the buggy address:
 ffffc90003847a00: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 00 00 00
 ffffc90003847a80: 00 00 00 f2 f2 f2 f2 f2 00 00 00 00 00 00 00 00
>ffffc90003847b00: 00 00 00 00 00 00 00 00 f2 f2 f2 f2 00 00 00 00
                                                    ^
 ffffc90003847b80: 00 00 00 00 00 00 00 00 00 00 00 00 f2 f2 f2 f2
 ffffc90003847c00: 00 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================

***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
* [PATCH Next V2] net: restore the iterator to its original state when an error occurs

From: Edward Adam Davis @ 2025-11-28 13:35 UTC
To: syzbot+ci3edb9412aeb2e703
Cc: davem, eadavis, edumazet, eperezma, horms, jasowang, kuba, kvm,
    linux-kernel, mst, netdev, pabeni, sgarzare, stefanha, syzbot, syzbot,
    syzkaller-bugs, virtualization, xuanzhuo

In zerocopy_fill_skb_from_iter(), if two copy operations are performed
and the first one succeeds while the second one fails, it returns a
failure, but the count in the iterator has already been decremented by
the first successful copy. This ultimately affects the local variable
rest_len in virtio_transport_send_pkt_info(), leaving rest_len greater
than the actual iterator count. As a result, packet sending continues
even when the iterator count is zero, which in turn leads to skb->len
being 0 and triggers the warning reported by syzbot [1].

Therefore, if the zerocopy operation fails, revert the iterator to its
original state. The iov_iter_revert() in skb_zerocopy_iter_stream() is
no longer needed and has been removed.

[1]
'send_pkt()' returns 0, but 4096 expected
WARNING: net/vmw_vsock/virtio_transport_common.c:430 at virtio_transport_send_pkt_info+0xd1e/0xef0 net/vmw_vsock/virtio_transport_common.c:428, CPU#1: syz.0.17/5986
Call Trace:
 virtio_transport_stream_enqueue net/vmw_vsock/virtio_transport_common.c:1113 [inline]
 virtio_transport_seqpacket_enqueue+0x143/0x1c0 net/vmw_vsock/virtio_transport_common.c:841
 vsock_connectible_sendmsg+0xabf/0x1040 net/vmw_vsock/af_vsock.c:2158
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:746

Reported-by: syzbot+28e5f3d207b14bae122a@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=28e5f3d207b14bae122a
Tested-by: syzbot+28e5f3d207b14bae122a@syzkaller.appspotmail.com
Signed-off-by: Edward Adam Davis <eadavis@qq.com>
---
V1 -> V2: Remove iov_iter_revert() in skb_zerocopy_iter_stream()

 net/core/datagram.c | 6 ++++++
 net/core/skbuff.c   | 1 -
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/net/core/datagram.c b/net/core/datagram.c
index c285c6465923..da10465cd8a4 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -748,10 +748,13 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 			    size_t length,
 			    struct net_devmem_dmabuf_binding *binding)
 {
+	struct iov_iter_state state;
 	unsigned long orig_size = skb->truesize;
 	unsigned long truesize;
 	int ret;
 
+	iov_iter_save_state(from, &state);
+
 	if (msg && msg->msg_ubuf && msg->sg_from_iter)
 		ret = msg->sg_from_iter(skb, from, length);
 	else if (binding)
@@ -759,6 +762,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
 	else
 		ret = zerocopy_fill_skb_from_iter(skb, from, length);
 
+	if (ret)
+		iov_iter_restore(from, &state);
+
 	truesize = skb->truesize - orig_size;
 	if (sk && sk->sk_type == SOCK_STREAM) {
 		sk_wmem_queued_add(sk, truesize);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 5a1d123e7ef7..77ed045c28ff 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1908,7 +1908,6 @@ int skb_zerocopy_iter_stream(struct sock *sk, struct sk_buff *skb,
 		struct sock *save_sk = skb->sk;
 
 		/* Streams do not free skb on error. Reset to prev state. */
-		iov_iter_revert(&msg->msg_iter, skb->len - orig_len);
 		skb->sk = sk;
 		___pskb_trim(skb, orig_len);
 		skb->sk = save_sk;
-- 
2.43.0
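[Why V2 also deletes the caller-side iov_iter_revert(): with the V1 change
alone, a failed copy was presumably rewound twice, once by iov_iter_restore()
in __zerocopy_sg_from_iter() and again by iov_iter_revert() in
skb_zerocopy_iter_stream(), which steps backwards past the start of the
iovec array; that is consistent with the KASAN out-of-bounds reads in
iov_iter_revert() from the ci report above. A minimal sketch of that double
rewind, illustrative only; example_double_revert() is a hypothetical helper:]

	#include <linux/uio.h>

	static void example_double_revert(void *dst, struct iov_iter *from,
					  size_t len)
	{
		struct iov_iter_state state;
		size_t copied;

		iov_iter_save_state(from, &state);
		copied = copy_from_iter(dst, len, from); /* advances iterator */

		/* With the datagram.c hunk applied, the failing copy path has
		 * already put the iterator back where it started...
		 */
		iov_iter_restore(from, &state);

		/* ...so the old caller-side revert of the same byte count now
		 * rewinds past the iterator's start and walks iovec entries
		 * that precede the array -- the out-of-bounds reads KASAN
		 * flagged in iov_iter_revert().
		 */
		iov_iter_revert(from, copied);
	}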
* Re: [PATCH Next V2] net: restore the iterator to its original state when an error occurs

From: Jakub Kicinski @ 2025-11-28 17:39 UTC
To: Edward Adam Davis
Cc: syzbot+ci3edb9412aeb2e703, davem, edumazet, eperezma, horms, jasowang,
    kvm, linux-kernel, mst, netdev, pabeni, sgarzare, stefanha, syzbot,
    syzbot, syzkaller-bugs, virtualization, xuanzhuo

On Fri, 28 Nov 2025 21:35:57 +0800 Edward Adam Davis wrote:
> In zerocopy_fill_skb_from_iter(), if two copy operations are performed
> and the first one succeeds while the second one fails, it returns a
> failure, but the count in the iterator has already been decremented by
> the first successful copy. This ultimately affects the local variable
> rest_len in virtio_transport_send_pkt_info(), leaving rest_len greater
> than the actual iterator count. As a result, packet sending continues
> even when the iterator count is zero, which in turn leads to skb->len
> being 0 and triggers the warning reported by syzbot [1].

Please follow the subsystem guidelines for posting patches:
https://www.kernel.org/doc/html/next/process/maintainer-netdev.html

Your patch breaks zerocopy tests.
* Re: [PATCH Next V2] net: restore the iterator to its original state when an error occurs

From: Edward Adam Davis @ 2025-12-01  3:41 UTC
To: kuba
Cc: davem, eadavis, edumazet, eperezma, horms, jasowang, kvm, linux-kernel,
    mst, netdev, pabeni, sgarzare, stefanha, syzbot+ci3edb9412aeb2e703,
    syzbot, syzbot, syzkaller-bugs, virtualization, xuanzhuo

On Fri, 28 Nov 2025 09:39:46 -0800, Jakub Kicinski wrote:
> > In zerocopy_fill_skb_from_iter(), if two copy operations are performed
> > and the first one succeeds while the second one fails, it returns a
> > failure, but the count in the iterator has already been decremented by
> > the first successful copy. This ultimately affects the local variable
> > rest_len in virtio_transport_send_pkt_info(), leaving rest_len greater
> > than the actual iterator count. As a result, packet sending continues
> > even when the iterator count is zero, which in turn leads to skb->len
> > being 0 and triggers the warning reported by syzbot [1].
>
> Please follow the subsystem guidelines for posting patches:
> https://www.kernel.org/doc/html/next/process/maintainer-netdev.html
> Your patch breaks zerocopy tests.
I see that they all timed out. I'm not familiar with these tests; how
can I get more details about them?
* Re: [PATCH Next V2] net: restore the iterator to its original state when an error occurs

From: Jakub Kicinski @ 2025-12-01 19:15 UTC
To: Edward Adam Davis
Cc: davem, edumazet, eperezma, horms, jasowang, kvm, linux-kernel, mst,
    netdev, pabeni, sgarzare, stefanha, syzbot+ci3edb9412aeb2e703, syzbot,
    syzbot, syzkaller-bugs, virtualization, xuanzhuo

On Mon, 1 Dec 2025 11:41:07 +0800 Edward Adam Davis wrote:
> On Fri, 28 Nov 2025 09:39:46 -0800, Jakub Kicinski wrote:
> > > In zerocopy_fill_skb_from_iter(), if two copy operations are performed
> > > and the first one succeeds while the second one fails, it returns a
> > > failure, but the count in the iterator has already been decremented by
> > > the first successful copy. This ultimately affects the local variable
> > > rest_len in virtio_transport_send_pkt_info(), leaving rest_len greater
> > > than the actual iterator count. As a result, packet sending continues
> > > even when the iterator count is zero, which in turn leads to skb->len
> > > being 0 and triggers the warning reported by syzbot [1].
> >
> > Please follow the subsystem guidelines for posting patches:
> > https://www.kernel.org/doc/html/next/process/maintainer-netdev.html
> > Your patch breaks zerocopy tests.
> I see that they all timed out. I'm not familiar with these tests; how
> can I get more details about them?

IIRC it was the packetdrill tests:

tools/testing/selftests/net/packetdrill/tcp_fastopen_server_basic-zero-payload.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_basic.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_batch.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_client.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_closed.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_epoll_edge.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_epoll_exclusive.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_epoll_oneshot.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_fastopen-client.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_fastopen-server.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_maxfrags.pkt
tools/testing/selftests/net/packetdrill/tcp_zerocopy_small.pkt

If you have the packetdrill command installed, those _should_ be
relatively easy to run via the standard kselftest commands.