From: Takashi Iwai <tiwai@suse.de>
To: Juefei Pu <juefei.pu@email.ucr.edu>
Cc: perex@perex.cz, tiwai@suse.com, linux-sound@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: BUG: INFO: task hung in seq_free_client
Date: Sun, 25 Aug 2024 17:54:32 +0200 [thread overview]
Message-ID: <87v7zoionb.wl-tiwai@suse.de> (raw)
In-Reply-To: <CANikGpe4pbGxQV+CCvSF6U+4cGHLWBzY_WXGSV9m+prBE6tYEQ@mail.gmail.com>
On Sun, 25 Aug 2024 01:08:32 +0200,
Juefei Pu wrote:
>
> Hello,
> We found the following issue using syzkaller on Linux v6.10.
> In `seq_free_client`, the task hangs when trying to acquire lock
> `register_mutex`.
>
> Unfortunately, syzkaller failed to generate a reproducer.
> But at least we have the report:
Unfortunately the stack trace isn't really helpful. It merely shows
that something else is already holding register_mutex; most likely
something went wrong while releasing another client, and that path is
still holding the mutex. But I can't say more than that from this
info for now.
In any case, if this is reproducible and you have a reproducer,
please report it again.
thanks,
Takashi
>
> INFO: task syz.0.38:8767 blocked for more than 143 seconds.
> Not tainted 6.10.0 #13
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> task:syz.0.38 state:D stack:24680 pid:8767 tgid:8767
> ppid:8050 flags:0x00000004
> Call Trace:
> <TASK>
> context_switch kernel/sched/core.c:5407 [inline]
> __schedule+0xf4a/0x15e0 kernel/sched/core.c:6748
> __schedule_loop kernel/sched/core.c:6825 [inline]
> schedule+0x143/0x310 kernel/sched/core.c:6840
> schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6897
> __mutex_lock_common kernel/locking/mutex.c:684 [inline]
> __mutex_lock+0x69a/0xd50 kernel/locking/mutex.c:752
> seq_free_client+0x27/0x180 sound/core/seq/seq_clientmgr.c:298
> snd_seq_release+0x44/0xc0 sound/core/seq/seq_clientmgr.c:387
> __fput+0x24a/0x8a0 fs/file_table.c:422
> task_work_run+0x239/0x2f0 kernel/task_work.c:180
> resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
> exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
> exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
> __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
> syscall_exit_to_user_mode+0x12d/0x280 kernel/entry/common.c:218
> do_syscall_64+0x8a/0x150 arch/x86/entry/common.c:89
> entry_SYSCALL_64_after_hwframe+0x67/0x6f
> RIP: 0033:0x7f9a0cf809b9
> RSP: 002b:00007ffc31b75f88 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
> RAX: 0000000000000000 RBX: 00007f9a0d147a80 RCX: 00007f9a0cf809b9
> RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
> RBP: 00007f9a0d147a80 R08: 0000000000000006 R09: 00007ffc31b7626f
> R10: 00000000003ffcd0 R11: 0000000000000246 R12: 00000000000248af
> R13: 00007ffc31b76080 R14: 00007ffc31b760a0 R15: ffffffffffffffff
> </TASK>
>
> Showing all locks held in the system:
> 3 locks held by kworker/0:1/10:
> #0: ffff88801307a948 ((wq_completion)events){+.+.}-{0:0}, at:
> process_one_work kernel/workqueue.c:3223 [inline]
> #0: ffff88801307a948 ((wq_completion)events){+.+.}-{0:0}, at:
> process_scheduled_works+0x8fb/0x1410 kernel/workqueue.c:3329
> #1: ffffc900000cfd20
> ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at:
> process_one_work kernel/workqueue.c:3224 [inline]
> #1: ffffc900000cfd20
> ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at:
> process_scheduled_works+0x922/0x1410 kernel/workqueue.c:3329
> #2: ffff888100af7240 (&data->fib_lock){+.+.}-{3:3}, at:
> nsim_fib_event_work+0x2de/0x4050 drivers/net/netdevsim/fib.c:1489
> 1 lock held by khungtaskd/25:
> #0: ffffffff8db32fe0 (rcu_read_lock){....}-{1:2}, at:
> rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
> #0: ffffffff8db32fe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock
> include/linux/rcupdate.h:781 [inline]
> #0: ffffffff8db32fe0 (rcu_read_lock){....}-{1:2}, at:
> debug_show_all_locks+0x54/0x2d0 kernel/locking/lockdep.c:6614
> 2 locks held by kworker/u4:9/2825:
> 1 lock held by systemd-udevd/4506:
> 1 lock held by in:imklog/7643:
> 1 lock held by syz.1.37/8761:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> seq_free_client+0x27/0x180 sound/core/seq/seq_clientmgr.c:298
> 1 lock held by syz.0.38/8767:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> seq_free_client+0x27/0x180 sound/core/seq/seq_clientmgr.c:298
> 1 lock held by syz.0.44/9437:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.46/9506:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.46/9508:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.46/9509:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.0.53/10021:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.0.53/10022:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.57/10085:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.57/10091:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.0.64/10607:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.66/10653:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.0.70/11162:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.68/11179:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.0.71/11691:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.72/11705:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.1.74/12211:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.0.81/12259:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 2 locks held by syz-executor/12269:
> 1 lock held by systemd-udevd/12279:
> 2 locks held by syz-executor/12313:
> 1 lock held by syz.1.82/12748:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335
> 1 lock held by syz.0.83/12763:
> #0: ffffffff8ecc0b08 (register_mutex#3){+.+.}-{3:3}, at:
> snd_seq_open+0x42/0x4a0 sound/core/seq/seq_clientmgr.c:335