Date: Wed, 12 Oct 2022 07:03:39 -0700
Message-ID: <00000000000017227f05ead6dc15@google.com>
Subject: [syzbot] possible deadlock in tcp_sock_set_cork
From: syzbot
To: bpf@vger.kernel.org, davem@davemloft.net, dsahern@kernel.org,
	edumazet@google.com, kuba@kernel.org, linux-kernel@vger.kernel.org,
	llvm@lists.linux.dev, nathan@kernel.org, ndesaulniers@google.com,
	netdev@vger.kernel.org, pabeni@redhat.com,
	syzkaller-bugs@googlegroups.com, trix@redhat.com,
	yoshfuji@linux-ipv6.org
List-ID: <linux-kernel.vger.kernel.org>

Hello,

syzbot found the following issue on:

HEAD commit:    a5088ee7251e Merge tag 'thermal-6.1-rc1' of git://git.kern..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=17c929b8880000
kernel config:  https://syzkaller.appspot.com/x/.config?x=201ae572239e648
dashboard link: https://syzkaller.appspot.com/bug?extid=c4b21407c3b1dc66ee65
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/26341f70ccb8/disk-a5088ee7.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ca8a6f6b0303/vmlinux-a5088ee7.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c4b21407c3b1dc66ee65@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.0.0-syzkaller-00372-ga5088ee7251e #0 Not tainted
------------------------------------------------------
kworker/u4:27/14295 is trying to acquire lock:
ffff888022948fb0 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1712 [inline]
ffff888022948fb0 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: tcp_sock_set_cork+0x16/0x90 net/ipv4/tcp.c:3337

but task is already holding lock:
ffffc90004a4fda8 ((work_completion)(&(&cp->cp_send_w)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 ((work_completion)(&(&cp->cp_send_w)->work)){+.+.}-{0:0}:
       __flush_work+0x105/0xae0 kernel/workqueue.c:3069
       __cancel_work_timer+0x3f9/0x570 kernel/workqueue.c:3160
       rds_tcp_reset_callbacks+0x1cb/0x4d0 net/rds/tcp.c:171
       rds_tcp_accept_one+0x9d5/0xd10 net/rds/tcp_listen.c:203
       rds_tcp_accept_worker+0x55/0x80 net/rds/tcp.c:529
       process_one_work+0x991/0x1610 kernel/workqueue.c:2289
       worker_thread+0x665/0x1080 kernel/workqueue.c:2436
       kthread+0x2e4/0x3a0 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

-> #0 (k-sk_lock-AF_INET6){+.+.}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3095 [inline]
       check_prevs_add kernel/locking/lockdep.c:3214 [inline]
       validate_chain kernel/locking/lockdep.c:3829 [inline]
       __lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5053
       lock_acquire kernel/locking/lockdep.c:5666 [inline]
       lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5631
       lock_sock_nested+0x36/0xf0 net/core/sock.c:3393
       lock_sock include/net/sock.h:1712 [inline]
       tcp_sock_set_cork+0x16/0x90 net/ipv4/tcp.c:3337
       rds_send_xmit+0x386/0x2540 net/rds/send.c:194
       rds_send_worker+0x92/0x2e0 net/rds/threads.c:200
       process_one_work+0x991/0x1610 kernel/workqueue.c:2289
       worker_thread+0x665/0x1080 kernel/workqueue.c:2436
       kthread+0x2e4/0x3a0 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&(&cp->cp_send_w)->work));
                               lock(k-sk_lock-AF_INET6);
                               lock((work_completion)(&(&cp->cp_send_w)->work));
  lock(k-sk_lock-AF_INET6);

 *** DEADLOCK ***

2 locks held by kworker/u4:27/14295:
 #0: ffff888027f19938 ((wq_completion)krdsd){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888027f19938 ((wq_completion)krdsd){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888027f19938 ((wq_completion)krdsd){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888027f19938 ((wq_completion)krdsd){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888027f19938 ((wq_completion)krdsd){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888027f19938 ((wq_completion)krdsd){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc90004a4fda8 ((work_completion)(&(&cp->cp_send_w)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264

stack backtrace:
CPU: 0 PID: 14295 Comm: kworker/u4:27 Not tainted 6.0.0-syzkaller-00372-ga5088ee7251e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/22/2022
Workqueue: krdsd rds_send_worker
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3095 [inline]
 check_prevs_add kernel/locking/lockdep.c:3214 [inline]
 validate_chain kernel/locking/lockdep.c:3829 [inline]
 __lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5053
 lock_acquire kernel/locking/lockdep.c:5666 [inline]
 lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5631
 lock_sock_nested+0x36/0xf0 net/core/sock.c:3393
 lock_sock include/net/sock.h:1712 [inline]
 tcp_sock_set_cork+0x16/0x90 net/ipv4/tcp.c:3337
 rds_send_xmit+0x386/0x2540 net/rds/send.c:194
 rds_send_worker+0x92/0x2e0 net/rds/threads.c:200
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue.
See: https://goo.gl/tpsmEJ#status for how to communicate with syzbot.