* [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
@ 2026-04-18 4:12 Li Lei
2026-04-23 19:50 ` Viacheslav Dubeyko
2026-04-27 19:57 ` Viacheslav Dubeyko
0 siblings, 2 replies; 9+ messages in thread
From: Li Lei @ 2026-04-18 4:12 UTC (permalink / raw)
To: slava, idryomov, amarkuze
Cc: ceph-devel, linux-kernel, noctis.akm, Li Lei, Zhao Sun
This reverts commit bca9fc14c70fcbbebc84954cc39994e463fb9468.
Deadlock detected between mdsc->snap_rwsem and the I_NEW bit in
handle_reply().
- kworker/u113:1 (stat inode)
  1) Holds an inode with I_NEW set
  2) Requests mdsc->snap_rwsem
- kworker/u113:2 (readdir)
  1) Holds mdsc->snap_rwsem
  2) Waits for the inode's I_NEW flag to be cleared
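For illustration only: below is a minimal userspace simulation of this ABBA
cycle (assumption-laden stand-ins, not kernel code: a pthread rwlock plays
mdsc->snap_rwsem and a condvar-guarded flag plays the I_NEW bit). Built with
`gcc -pthread`, it hangs in the same shape as the two kworkers in the traces
below:
```
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t snap_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t i_new_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t i_new_cond = PTHREAD_COND_INITIALIZER;
static bool i_new = true;	/* the stat worker's inode starts out I_NEW */

/* like kworker/u113:1: holds the I_NEW inode, then wants snap_rwsem */
static void *stat_worker(void *arg)
{
	sleep(1);			/* let the readdir worker win the race */
	pthread_rwlock_wrlock(&snap_rwsem); /* blocks: a reader holds it */
	pthread_mutex_lock(&i_new_lock);    /* never reached */
	i_new = false;			/* this is what would clear I_NEW */
	pthread_cond_broadcast(&i_new_cond);
	pthread_mutex_unlock(&i_new_lock);
	pthread_rwlock_unlock(&snap_rwsem);
	return arg;
}

/* like kworker/u113:2: takes snap_rwsem first, then waits on I_NEW */
static void *readdir_worker(void *arg)
{
	pthread_rwlock_rdlock(&snap_rwsem);
	pthread_mutex_lock(&i_new_lock);
	while (i_new)			/* waits forever: the only writer */
		pthread_cond_wait(&i_new_cond, &i_new_lock); /* is stuck */
	pthread_mutex_unlock(&i_new_lock);
	pthread_rwlock_unlock(&snap_rwsem);
	return arg;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, stat_worker, NULL);
	pthread_create(&b, NULL, readdir_worker, NULL);
	pthread_join(a, NULL);		/* never returns: ABBA deadlock */
	pthread_join(b, NULL);
	puts("unreachable");
	return 0;
}
```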
task:kworker/u113:1 state:D stack: 0 pid:34454 ppid: 2
flags:0x00004000
Workqueue: ceph-msgr ceph_con_workfn [libceph]
Call Trace:
__schedule+0x3a9/0x8d0
schedule+0x49/0xb0
rwsem_down_write_slowpath+0x30a/0x5e0
handle_reply+0x4d7/0x7f0 [ceph]
? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
mds_dispatch+0x10a/0x690 [ceph]
? calc_signature+0xdf/0x110 [libceph]
? ceph_x_check_message_signature+0x58/0xc0 [libceph]
ceph_con_process_message+0x73/0x140 [libceph]
ceph_con_v1_try_read+0x2f2/0x860 [libceph]
ceph_con_workfn+0x31e/0x660 [libceph]
process_one_work+0x1cb/0x370
worker_thread+0x30/0x390
? process_one_work+0x370/0x370
kthread+0x13e/0x160
? set_kthread_struct+0x50/0x50
ret_from_fork+0x1f/0x30
task:kworker/u113:2 state:D stack: 0 pid:54267 ppid: 2
flags:0x00004000
Workqueue: ceph-msgr ceph_con_workfn [libceph]
Call Trace:
__schedule+0x3a9/0x8d0
? bit_wait_io+0x60/0x60
? bit_wait_io+0x60/0x60
schedule+0x49/0xb0
bit_wait+0xd/0x60
__wait_on_bit+0x2a/0x90
? ceph_force_reconnect+0x90/0x90 [ceph]
out_of_line_wait_on_bit+0x91/0xb0
? bitmap_empty+0x20/0x20
ilookup5.part.29+0x69/0x90
? ceph_force_reconnect+0x90/0x90 [ceph]
? ceph_ino_compare+0x30/0x30 [ceph]
iget5_locked+0x26/0x90
ceph_get_inode+0x45/0x130 [ceph]
ceph_readdir_prepopulate+0x59f/0xca0 [ceph]
handle_reply+0x78d/0x7f0 [ceph]
? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
mds_dispatch+0x10a/0x690 [ceph]
? calc_signature+0xdf/0x110 [libceph]
? ceph_x_check_message_signature+0x58/0xc0 [libceph]
ceph_con_process_message+0x73/0x140 [libceph]
ceph_con_v1_try_read+0x2f2/0x860 [libceph]
ceph_con_workfn+0x31e/0x660 [libceph]
process_one_work+0x1cb/0x370
worker_thread+0x30/0x390
? process_one_work+0x370/0x370
kthread+0x13e/0x160
? set_kthread_struct+0x50/0x50
ret_from_fork+0x1f/0x30
It's rather rare to catch, but here are fast reproduction steps
(multiple MDSes are needed):
1. Find two directories (DIR_a, DIR_b) in a cephfs cluster and make sure
they have different auth MDS nodes. This way, the client has a chance to
run handle_reply() on different CPUs for our test
(see steps 5 and 6).
2. In DIR_b, create a hard link to DIR_a/FILE_a, namely FILE_b.
DIR_a/FILE_a and DIR_b/FILE_b then have the same ino (e.g. 123456).
3. Hardcode that ino in the code below, making it sleep during the stat request.
```
static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
goto out_err;
}
req->r_target_inode = in;
+ if (in->i_ino == 123456) {
+ pr_err("inode %lu found, ready to wait 10 seconds.\n",
+ in->i_ino);
+ msleep(10000);
+ }
```
4. Execute `echo 3 > /proc/sys/vm/drop_caches`.
5. In one shell, run `cd DIR_a; stat DIR_a/FILE_a`; this shell is expected to get
stuck because of the msleep() in handle_reply().
6. In another shell, run `cd DIR_b; ls DIR_b/` to trigger ceph_readdir_prepopulate().
Repeating steps 4-6 fewer than 10 times is enough to see the problem.
It turns out that commit bca9fc14c70f ("ceph: when filling trace, call ceph_get_inode outside of mutexes")
moved ceph_get_inode() outside snap_rwsem and thereby opened a window for
ceph_get_inode() and snap_rwsem to deadlock.

After commit 6a92b08fdad2 ("ceph: don't take s_mutex or snap_rwsem in ceph_check_caps"),
the original commit (bca9fc14c70f) can be reverted safely.
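To make the ordering change concrete, here is a rough before/after sketch of
the lock order in handle_reply() (a simplification, not literal fs/ceph code;
arguments and the read/write flavor of snap_rwsem are elided):
```
/* Before the revert, in handle_reply():                               */
in = ceph_get_inode(sb, tvino, NULL);  /* may hand back an I_NEW inode */
down_write(&mdsc->snap_rwsem);         /* then waits while holding it  */
ceph_fill_trace(...);                  /* I_NEW is cleared only here   */

/* After the revert:                                                   */
down_write(&mdsc->snap_rwsem);         /* snap_rwsem always comes first */
ceph_fill_trace(...);
	in = ceph_get_inode(sb, tvino, in); /* I_NEW set and cleared    */
	unlock_new_inode(in);               /* with snap_rwsem held     */
```
With a single snap_rwsem -> I_NEW order everywhere, the cycle in the traces
above can no longer form.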
Signed-off-by: Zhao Sun <sunzhao03@kuaishou.com>
Signed-off-by: Li Lei <lilei24@kuaishou.com>
---
fs/ceph/inode.c | 26 ++++++++++++++++++++++----
fs/ceph/mds_client.c | 29 -----------------------------
2 files changed, 22 insertions(+), 33 deletions(-)
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index d99e12d..0c241a4 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -1667,10 +1667,28 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
}
if (rinfo->head->is_target) {
- /* Should be filled in by handle_reply */
- BUG_ON(!req->r_target_inode);
+ in = xchg(&req->r_new_inode, NULL);
+ tvino.ino = le64_to_cpu(rinfo->targeti.in->ino);
+ tvino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
+
+ /*
+ * If we ended up opening an existing inode, discard
+ * r_new_inode
+ */
+ if (req->r_op == CEPH_MDS_OP_CREATE &&
+ !req->r_reply_info.has_create_ino) {
+ /* This should never happen on an async create */
+ WARN_ON_ONCE(req->r_deleg_ino);
+ iput(in);
+ in = NULL;
+ }
+
+ in = ceph_get_inode(fsc->sb, tvino, in);
+ if (IS_ERR(in)) {
+ err = PTR_ERR(in);
+ goto done;
+ }
- in = req->r_target_inode;
err = ceph_fill_inode(in, req->r_locked_page, &rinfo->targeti,
NULL, session,
(!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags) &&
@@ -1680,13 +1698,13 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
if (err < 0) {
pr_err_client(cl, "badness %p %llx.%llx\n", in,
ceph_vinop(in));
- req->r_target_inode = NULL;
if (inode_state_read_once(in) & I_NEW)
discard_new_inode(in);
else
iput(in);
goto done;
}
+ req->r_target_inode = in;
if (inode_state_read_once(in) & I_NEW)
unlock_new_inode(in);
}
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index b174627..8a27775 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -3941,36 +3941,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
session->s_con.peer_features);
mutex_unlock(&mdsc->mutex);
- /* Must find target inode outside of mutexes to avoid deadlocks */
rinfo = &req->r_reply_info;
- if ((err >= 0) && rinfo->head->is_target) {
- struct inode *in = xchg(&req->r_new_inode, NULL);
- struct ceph_vino tvino = {
- .ino = le64_to_cpu(rinfo->targeti.in->ino),
- .snap = le64_to_cpu(rinfo->targeti.in->snapid)
- };
-
- /*
- * If we ended up opening an existing inode, discard
- * r_new_inode
- */
- if (req->r_op == CEPH_MDS_OP_CREATE &&
- !req->r_reply_info.has_create_ino) {
- /* This should never happen on an async create */
- WARN_ON_ONCE(req->r_deleg_ino);
- iput(in);
- in = NULL;
- }
-
- in = ceph_get_inode(mdsc->fsc->sb, tvino, in);
- if (IS_ERR(in)) {
- err = PTR_ERR(in);
- mutex_lock(&session->s_mutex);
- goto out_err;
- }
- req->r_target_inode = in;
- }
-
mutex_lock(&session->s_mutex);
if (err < 0) {
pr_err_client(cl, "got corrupt reply mds%d(tid:%lld)\n",
--
1.8.3.1
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
2026-04-18 4:12 [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes" Li Lei
@ 2026-04-23 19:50 ` Viacheslav Dubeyko
2026-04-24 18:11 ` Viacheslav Dubeyko
2026-04-27 19:57 ` Viacheslav Dubeyko
1 sibling, 1 reply; 9+ messages in thread
From: Viacheslav Dubeyko @ 2026-04-23 19:50 UTC (permalink / raw)
To: lilei24@kuaishou.com, idryomov@gmail.com, slava@dubeyko.com,
Alex Markuze
Cc: ceph-devel@vger.kernel.org, noctis.akm@gmail.com,
linux-kernel@vger.kernel.org, sunzhao03@kuaishou.com
On Sat, 2026-04-18 at 12:12 +0800, Li Lei wrote:
> This reverts commit bca9fc14c70fcbbebc84954cc39994e463fb9468.
>
> Deadlock detected between mdsc->snap_rwsem and the I_NEW bit in
> handle_reply().
>
> - kworker/u113:1 (stat inode)
> 1) Holds an inode with I_NEW set
> 2) Requests mdsc->snap_rwsem
> - kworker/u113:2 (readdir)
> 1) Holds mdsc->snap_rwsem
> 2) Waits for the inode's I_NEW flag to be cleared
>
> task:kworker/u113:1 state:D stack: 0 pid:34454 ppid: 2
> flags:0x00004000
> Workqueue: ceph-msgr ceph_con_workfn [libceph]
> Call Trace:
> __schedule+0x3a9/0x8d0
> schedule+0x49/0xb0
> rwsem_down_write_slowpath+0x30a/0x5e0
> handle_reply+0x4d7/0x7f0 [ceph]
> ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
> mds_dispatch+0x10a/0x690 [ceph]
> ? calc_signature+0xdf/0x110 [libceph]
> ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
> ceph_con_process_message+0x73/0x140 [libceph]
> ceph_con_v1_try_read+0x2f2/0x860 [libceph]
> ceph_con_workfn+0x31e/0x660 [libceph]
> process_one_work+0x1cb/0x370
> worker_thread+0x30/0x390
> ? process_one_work+0x370/0x370
> kthread+0x13e/0x160
> ? set_kthread_struct+0x50/0x50
> ret_from_fork+0x1f/0x30
>
> task:kworker/u113:2 state:D stack: 0 pid:54267 ppid: 2
> flags:0x00004000
> Workqueue: ceph-msgr ceph_con_workfn [libceph]
> Call Trace:
> __schedule+0x3a9/0x8d0
> ? bit_wait_io+0x60/0x60
> ? bit_wait_io+0x60/0x60
> schedule+0x49/0xb0
> bit_wait+0xd/0x60
> __wait_on_bit+0x2a/0x90
> ? ceph_force_reconnect+0x90/0x90 [ceph]
> out_of_line_wait_on_bit+0x91/0xb0
> ? bitmap_empty+0x20/0x20
> ilookup5.part.29+0x69/0x90
> ? ceph_force_reconnect+0x90/0x90 [ceph]
> ? ceph_ino_compare+0x30/0x30 [ceph]
> iget5_locked+0x26/0x90
> ceph_get_inode+0x45/0x130 [ceph]
> ceph_readdir_prepopulate+0x59f/0xca0 [ceph]
> handle_reply+0x78d/0x7f0 [ceph]
> ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
> mds_dispatch+0x10a/0x690 [ceph]
> ? calc_signature+0xdf/0x110 [libceph]
> ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
> ceph_con_process_message+0x73/0x140 [libceph]
> ceph_con_v1_try_read+0x2f2/0x860 [libceph]
> ceph_con_workfn+0x31e/0x660 [libceph]
> process_one_work+0x1cb/0x370
> worker_thread+0x30/0x390
> ? process_one_work+0x370/0x370
> kthread+0x13e/0x160
> ? set_kthread_struct+0x50/0x50
> ret_from_fork+0x1f/0x30
>
> It's rather rare to catch, but here are fast reproduction steps
> (multiple MDSes are needed):
> 1. Find two directories (DIR_a, DIR_b) in a cephfs cluster and make sure
> they have different auth MDS nodes. This way, the client has a chance to
> run handle_reply() on different CPUs for our test
> (see steps 5 and 6).
> 2. In DIR_b, create a hard link to DIR_a/FILE_a, namely FILE_b.
> DIR_a/FILE_a and DIR_b/FILE_b then have the same ino (e.g. 123456).
> 3. Hardcode that ino in the code below, making it sleep during the stat request.
> ```
> static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
> goto out_err;
> }
> req->r_target_inode = in;
> + if (in->i_ino == 123456) {
> + pr_err("inode %lu found, ready to wait 10 seconds.\n",
> + in->i_ino);
> + msleep(10000);
> + }
> ```
> 4. Execute `echo 3 > /proc/sys/vm/drop_caches`.
> 5. In one shell, run `cd DIR_a; stat DIR_a/FILE_a`; this shell is expected to get
> stuck because of the msleep() in handle_reply().
> 6. In another shell, run `cd DIR_b; ls DIR_b/` to trigger ceph_readdir_prepopulate().
>
> Repeating steps 4-6 fewer than 10 times is enough to see the problem.
>
> It turns out that commit bca9fc14c70f ("ceph: when filling trace, call ceph_get_inode outside of mutexes")
> moved ceph_get_inode() outside snap_rwsem and thereby opened a window for
> ceph_get_inode() and snap_rwsem to deadlock.
>
> After commit 6a92b08fdad2 ("ceph: don't take s_mutex or snap_rwsem in ceph_check_caps"),
> the original commit (bca9fc14c70f) can be reverted safely.
>
> Signed-off-by: Zhao Sun <sunzhao03@kuaishou.com>
> Signed-off-by: Li Lei <lilei24@kuaishou.com>
> ---
> fs/ceph/inode.c | 26 ++++++++++++++++++++++----
> fs/ceph/mds_client.c | 29 -----------------------------
> 2 files changed, 22 insertions(+), 33 deletions(-)
>
> diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
> index d99e12d..0c241a4 100644
> --- a/fs/ceph/inode.c
> +++ b/fs/ceph/inode.c
> @@ -1667,10 +1667,28 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
> }
>
> if (rinfo->head->is_target) {
> - /* Should be filled in by handle_reply */
> - BUG_ON(!req->r_target_inode);
> + in = xchg(&req->r_new_inode, NULL);
> + tvino.ino = le64_to_cpu(rinfo->targeti.in->ino);
> + tvino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
> +
> + /*
> + * If we ended up opening an existing inode, discard
> + * r_new_inode
> + */
> + if (req->r_op == CEPH_MDS_OP_CREATE &&
> + !req->r_reply_info.has_create_ino) {
> + /* This should never happen on an async create */
> + WARN_ON_ONCE(req->r_deleg_ino);
> + iput(in);
> + in = NULL;
> + }
> +
> + in = ceph_get_inode(fsc->sb, tvino, in);
> + if (IS_ERR(in)) {
> + err = PTR_ERR(in);
> + goto done;
> + }
>
> - in = req->r_target_inode;
> err = ceph_fill_inode(in, req->r_locked_page, &rinfo->targeti,
> NULL, session,
> (!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags) &&
> @@ -1680,13 +1698,13 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
> if (err < 0) {
> pr_err_client(cl, "badness %p %llx.%llx\n", in,
> ceph_vinop(in));
> - req->r_target_inode = NULL;
> if (inode_state_read_once(in) & I_NEW)
> discard_new_inode(in);
> else
> iput(in);
> goto done;
> }
> + req->r_target_inode = in;
> if (inode_state_read_once(in) & I_NEW)
> unlock_new_inode(in);
> }
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index b174627..8a27775 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -3941,36 +3941,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
> session->s_con.peer_features);
> mutex_unlock(&mdsc->mutex);
>
> - /* Must find target inode outside of mutexes to avoid deadlocks */
> rinfo = &req->r_reply_info;
> - if ((err >= 0) && rinfo->head->is_target) {
> - struct inode *in = xchg(&req->r_new_inode, NULL);
> - struct ceph_vino tvino = {
> - .ino = le64_to_cpu(rinfo->targeti.in->ino),
> - .snap = le64_to_cpu(rinfo->targeti.in->snapid)
> - };
> -
> - /*
> - * If we ended up opening an existing inode, discard
> - * r_new_inode
> - */
> - if (req->r_op == CEPH_MDS_OP_CREATE &&
> - !req->r_reply_info.has_create_ino) {
> - /* This should never happen on an async create */
> - WARN_ON_ONCE(req->r_deleg_ino);
> - iput(in);
> - in = NULL;
> - }
> -
> - in = ceph_get_inode(mdsc->fsc->sb, tvino, in);
> - if (IS_ERR(in)) {
> - err = PTR_ERR(in);
> - mutex_lock(&session->s_mutex);
> - goto out_err;
> - }
> - req->r_target_inode = in;
> - }
> -
> mutex_lock(&session->s_mutex);
> if (err < 0) {
> pr_err_client(cl, "got corrupt reply mds%d(tid:%lld)\n",
Let me run xfstests for the patch to double check that nothing is broken.
Thanks,
Slava.
^ permalink raw reply [flat|nested] 9+ messages in thread
* RE: [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
2026-04-23 19:50 ` Viacheslav Dubeyko
@ 2026-04-24 18:11 ` Viacheslav Dubeyko
2026-04-24 18:54 ` 李磊
0 siblings, 1 reply; 9+ messages in thread
From: Viacheslav Dubeyko @ 2026-04-24 18:11 UTC (permalink / raw)
To: lilei24@kuaishou.com, idryomov@gmail.com, slava@dubeyko.com,
Alex Markuze
Cc: ceph-devel@vger.kernel.org, noctis.akm@gmail.com,
linux-kernel@vger.kernel.org, sunzhao03@kuaishou.com
On Thu, 2026-04-23 at 19:50 +0000, Viacheslav Dubeyko wrote:
> On Sat, 2026-04-18 at 12:12 +0800, Li Lei wrote:
> > This reverts commit bca9fc14c70fcbbebc84954cc39994e463fb9468.
> >
> > Deadlock detected between mdsc->snap_rwsem and the I_NEW bit in
> > handle_reply().
> >
> > - kworker/u113:1 (stat inode)
> > 1) Holds an inode with I_NEW set
> > 2) Requests mdsc->snap_rwsem
> > - kworker/u113:2 (readdir)
> > 1) Holds mdsc->snap_rwsem
> > 2) Waits for the inode's I_NEW flag to be cleared
> >
> > task:kworker/u113:1 state:D stack: 0 pid:34454 ppid: 2
> > flags:0x00004000
> > Workqueue: ceph-msgr ceph_con_workfn [libceph]
> > Call Trace:
> > __schedule+0x3a9/0x8d0
> > schedule+0x49/0xb0
> > rwsem_down_write_slowpath+0x30a/0x5e0
> > handle_reply+0x4d7/0x7f0 [ceph]
> > ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
> > mds_dispatch+0x10a/0x690 [ceph]
> > ? calc_signature+0xdf/0x110 [libceph]
> > ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
> > ceph_con_process_message+0x73/0x140 [libceph]
> > ceph_con_v1_try_read+0x2f2/0x860 [libceph]
> > ceph_con_workfn+0x31e/0x660 [libceph]
> > process_one_work+0x1cb/0x370
> > worker_thread+0x30/0x390
> > ? process_one_work+0x370/0x370
> > kthread+0x13e/0x160
> > ? set_kthread_struct+0x50/0x50
> > ret_from_fork+0x1f/0x30
> >
> > task:kworker/u113:2 state:D stack: 0 pid:54267 ppid: 2
> > flags:0x00004000
> > Workqueue: ceph-msgr ceph_con_workfn [libceph]
> > Call Trace:
> > __schedule+0x3a9/0x8d0
> > ? bit_wait_io+0x60/0x60
> > ? bit_wait_io+0x60/0x60
> > schedule+0x49/0xb0
> > bit_wait+0xd/0x60
> > __wait_on_bit+0x2a/0x90
> > ? ceph_force_reconnect+0x90/0x90 [ceph]
> > out_of_line_wait_on_bit+0x91/0xb0
> > ? bitmap_empty+0x20/0x20
> > ilookup5.part.29+0x69/0x90
> > ? ceph_force_reconnect+0x90/0x90 [ceph]
> > ? ceph_ino_compare+0x30/0x30 [ceph]
> > iget5_locked+0x26/0x90
> > ceph_get_inode+0x45/0x130 [ceph]
> > ceph_readdir_prepopulate+0x59f/0xca0 [ceph]
> > handle_reply+0x78d/0x7f0 [ceph]
> > ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
> > mds_dispatch+0x10a/0x690 [ceph]
> > ? calc_signature+0xdf/0x110 [libceph]
> > ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
> > ceph_con_process_message+0x73/0x140 [libceph]
> > ceph_con_v1_try_read+0x2f2/0x860 [libceph]
> > ceph_con_workfn+0x31e/0x660 [libceph]
> > process_one_work+0x1cb/0x370
> > worker_thread+0x30/0x390
> > ? process_one_work+0x370/0x370
> > kthread+0x13e/0x160
> > ? set_kthread_struct+0x50/0x50
> > ret_from_fork+0x1f/0x30
> >
> > It's rather rare to catch, but here are fast reproduction steps
> > (multiple MDSes are needed):
> > 1. Find two directories (DIR_a, DIR_b) in a cephfs cluster and make sure
> > they have different auth MDS nodes. This way, the client has a chance to
> > run handle_reply() on different CPUs for our test
> > (see steps 5 and 6).
> > 2. In DIR_b, create a hard link to DIR_a/FILE_a, namely FILE_b.
> > DIR_a/FILE_a and DIR_b/FILE_b then have the same ino (e.g. 123456).
> > 3. Hardcode that ino in the code below, making it sleep during the stat request.
> > ```
> > static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
> > goto out_err;
> > }
> > req->r_target_inode = in;
> > + if (in->i_ino == 123456) {
> > + pr_err("inode %lu found, ready to wait 10 seconds.\n",
> > + in->i_ino);
> > + msleep(10000);
> > + }
> > ```
> > 4. Execute `echo 3 > /proc/sys/vm/drop_caches`.
> > 5. In one shell, run `cd DIR_a; stat DIR_a/FILE_a`; this shell is expected to get
> > stuck because of the msleep() in handle_reply().
> > 6. In another shell, run `cd DIR_b; ls DIR_b/` to trigger ceph_readdir_prepopulate().
> >
> > Repeating steps 4-6 fewer than 10 times is enough to see the problem.
> >
> > It turns out that commit bca9fc14c70f ("ceph: when filling trace, call ceph_get_inode outside of mutexes")
> > moved ceph_get_inode() outside snap_rwsem and thereby opened a window for
> > ceph_get_inode() and snap_rwsem to deadlock.
> >
> > After commit 6a92b08fdad2 ("ceph: don't take s_mutex or snap_rwsem in ceph_check_caps"),
> > the original commit (bca9fc14c70f) can be reverted safely.
> >
> > Signed-off-by: Zhao Sun <sunzhao03@kuaishou.com>
> > Signed-off-by: Li Lei <lilei24@kuaishou.com>
> > ---
> > fs/ceph/inode.c | 26 ++++++++++++++++++++++----
> > fs/ceph/mds_client.c | 29 -----------------------------
> > 2 files changed, 22 insertions(+), 33 deletions(-)
> >
> > diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
> > index d99e12d..0c241a4 100644
> > --- a/fs/ceph/inode.c
> > +++ b/fs/ceph/inode.c
> > @@ -1667,10 +1667,28 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
> > }
> >
> > if (rinfo->head->is_target) {
> > - /* Should be filled in by handle_reply */
> > - BUG_ON(!req->r_target_inode);
> > + in = xchg(&req->r_new_inode, NULL);
> > + tvino.ino = le64_to_cpu(rinfo->targeti.in->ino);
> > + tvino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
> > +
> > + /*
> > + * If we ended up opening an existing inode, discard
> > + * r_new_inode
> > + */
> > + if (req->r_op == CEPH_MDS_OP_CREATE &&
> > + !req->r_reply_info.has_create_ino) {
> > + /* This should never happen on an async create */
> > + WARN_ON_ONCE(req->r_deleg_ino);
> > + iput(in);
> > + in = NULL;
> > + }
> > +
> > + in = ceph_get_inode(fsc->sb, tvino, in);
> > + if (IS_ERR(in)) {
> > + err = PTR_ERR(in);
> > + goto done;
> > + }
> >
> > - in = req->r_target_inode;
> > err = ceph_fill_inode(in, req->r_locked_page, &rinfo->targeti,
> > NULL, session,
> > (!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags) &&
> > @@ -1680,13 +1698,13 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
> > if (err < 0) {
> > pr_err_client(cl, "badness %p %llx.%llx\n", in,
> > ceph_vinop(in));
> > - req->r_target_inode = NULL;
> > if (inode_state_read_once(in) & I_NEW)
> > discard_new_inode(in);
> > else
> > iput(in);
> > goto done;
> > }
> > + req->r_target_inode = in;
> > if (inode_state_read_once(in) & I_NEW)
> > unlock_new_inode(in);
> > }
> > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > index b174627..8a27775 100644
> > --- a/fs/ceph/mds_client.c
> > +++ b/fs/ceph/mds_client.c
> > @@ -3941,36 +3941,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
> > session->s_con.peer_features);
> > mutex_unlock(&mdsc->mutex);
> >
> > - /* Must find target inode outside of mutexes to avoid deadlocks */
> > rinfo = &req->r_reply_info;
> > - if ((err >= 0) && rinfo->head->is_target) {
> > - struct inode *in = xchg(&req->r_new_inode, NULL);
> > - struct ceph_vino tvino = {
> > - .ino = le64_to_cpu(rinfo->targeti.in->ino),
> > - .snap = le64_to_cpu(rinfo->targeti.in->snapid)
> > - };
> > -
> > - /*
> > - * If we ended up opening an existing inode, discard
> > - * r_new_inode
> > - */
> > - if (req->r_op == CEPH_MDS_OP_CREATE &&
> > - !req->r_reply_info.has_create_ino) {
> > - /* This should never happen on an async create */
> > - WARN_ON_ONCE(req->r_deleg_ino);
> > - iput(in);
> > - in = NULL;
> > - }
> > -
> > - in = ceph_get_inode(mdsc->fsc->sb, tvino, in);
> > - if (IS_ERR(in)) {
> > - err = PTR_ERR(in);
> > - mutex_lock(&session->s_mutex);
> > - goto out_err;
> > - }
> > - req->r_target_inode = in;
> > - }
> > -
> > mutex_lock(&session->s_mutex);
> > if (err < 0) {
> > pr_err_client(cl, "got corrupt reply mds%d(tid:%lld)\n",
>
> Let me run xfstests for the patch to double check that nothing is broken.
>
>
Unexpectedly, I have a crash on generic/701 with your patch applied. I am going
to re-run xfstests without your patch.
generic/701 50s ... [failed, exit status 137]_check_dmesg: something found in
dmesg (see xfstests-dev/results//generic/701.dmesg)
- output mismatch (see xfstests-dev/results//generic/701.out.bad)
--- tests/generic/701.out 2026-04-01 19:47:53.878452478 -0700
+++ xfstests-dev/results//generic/701.out.bad 2026-04-24
04:57:37.887700953 -0700
@@ -1,2 +1,3 @@
QA output created by 701
-Number of allocated blocks after truncate is in range
+pwrite: Permission denied
+./check: line 700: 311685 Killed systemd-run --quiet --
unit "${unit}" --scope "${cmd[@]}"
...
(Run 'diff -u xfstests-dev/tests/generic/701.out xfstests-
dev/results//generic/701.out.bad' to see the entire diff)
Apr 23 22:42:26 ssdfs-vm-test kernel: [29717.030383] ceph: [2bf3efd7-3e1c-453d-
9665-3c75a0dc96dd 19499]: mds0 caps stale
Apr 23 22:43:18 ssdfs-vm-test kernel: [29733.938331] INFO: task
kworker/u32:0:241092 blocked for more than 122 seconds.
Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.940727] Not tainted 7.0.0+ #9
Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.941449] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.942756] task:kworker/u32:0
state:D stack:0 pid:241092 tgid:241092 ppid:2 task_flags:0x4208060
flags:0x00080000
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942773] Workqueue: writeback
wb_workfn (flush-ceph-766)
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942792] Call Trace:
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942795] <TASK>
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942803] __schedule+0x5b7/0x1fe0
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942812] ? lock_acquire+0xc8/0x2f0
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942819] ? schedule+0xb1/0x180
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942823] ? find_held_lock+0x31/0x90
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942829] ? schedule+0x10e/0x180
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942834] ? lock_release+0xd9/0x300
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942842] schedule+0x3a/0x180
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942847]
schedule_preempt_disabled+0x15/0x30
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942851] __mutex_lock+0x777/0x1220
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942858] ? ceph_con_send+0x4c/0x250
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942876]
mutex_lock_nested+0x1b/0x30
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942881] ?
mutex_lock_nested+0x1b/0x30
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943136] ceph_con_send+0x4c/0x250
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943146] ? lock_acquire+0xc8/0x2f0
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943158] send_request+0x7d0/0xcc0
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943174] ?
do_raw_spin_unlock+0x4e/0xe0
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943182] ?
_raw_spin_unlock+0x22/0x50
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943222]
__submit_request+0x2a9/0x4c0
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943236]
ceph_osdc_start_request+0x6c/0x90
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943245]
ceph_writepages_start+0x101b/0x21e0
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943254] ?
__lock_acquire+0x4a2/0x2650
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943331] do_writepages+0xc2/0x170
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943338] ? do_writepages+0xc2/0x170
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943342] ? lock_release+0xd9/0x300
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943347] ?
writeback_sb_inodes+0x1e7/0x900
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943358]
__writeback_single_inode+0x5a/0x5e0
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943365] ?
_raw_spin_unlock+0x22/0x50
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943377]
writeback_sb_inodes+0x29f/0x900
Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943435]
__writeback_inodes_wb+0x54/0xf0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943448] wb_writeback+0x3df/0x470
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943473] wb_workfn+0x364/0x520
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943493]
process_one_work+0x22f/0x620
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943515] worker_thread+0x1e2/0x400
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943525] ?
__pfx_worker_thread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943532] kthread+0x109/0x140
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943539] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943548] ret_from_fork+0x3e1/0x470
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943556] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943561] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943568]
ret_from_fork_asm+0x1a/0x30
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943595] </TASK>
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943669] INFO: task
kworker/u32:0:241092 is blocked on a mutex likely owned by task
kworker/7:4:308292.
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953769] task:kworker/7:4
state:R running task stack:0 pid:308292 tgid:308292 ppid:2
task_flags:0x4208860 flags:0x00080000
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953786] Workqueue: ceph-msgr
ceph_con_workfn
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953795] Call Trace:
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953798] <TASK>
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953805] __schedule+0x5b7/0x1fe0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953809] ?
irqentry_exit+0x299/0x740
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953843] ?
sysvec_apic_timer_interrupt+0x54/0xd0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953858] preempt_schedule+0x4c/0x80
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953862]
preempt_schedule_thunk+0x16/0x30
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953874]
_raw_spin_unlock_irq+0x4f/0x70
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953880] evict_folios+0x394/0xf00
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953886] ?
hrtimer_start_range_ns+0x270/0x980
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953904] ?
lock_is_held_type+0xaa/0x140
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953916]
try_to_shrink_lruvec+0x1a9/0x3b0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953934] shrink_one+0xd1/0x1c0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953943] shrink_node+0xba8/0x13d0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953954] ? shrink_node+0xb7e/0x13d0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953969]
do_try_to_free_pages+0xc1/0x4f0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953974] ?
do_try_to_free_pages+0xc1/0x4f0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953984]
try_to_free_pages+0xe0/0x310
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953989] ?
fs_reclaim_acquire+0x4f/0x100
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954009]
__alloc_frozen_pages_noprof+0xd49/0x1830
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954017] ?
ip_dst_mtu_maybe_forward.constprop.0+0x24/0x2a0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954034] ? lock_acquire+0xc8/0x2f0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954039] ?
__virt_addr_valid+0xf5/0x340
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954053] ?
policy_nodemask+0x122/0x1d0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954064]
alloc_pages_mpol+0xba/0x1a0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954069] ?
_copy_from_iter+0x273/0x790
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954080]
alloc_pages_noprof+0x59/0xe0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954087]
skb_page_frag_refill+0xc4/0xf0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954093]
sk_page_frag_refill+0x21/0xc0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954099]
tcp_sendmsg_locked+0x848/0x14e0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954122] tcp_sendmsg+0x30/0x60
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954128] inet_sendmsg+0x46/0x90
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954135] sock_sendmsg+0xf1/0x100
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954142] ? kfree+0x3ab/0x560
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954147] ?
__free_frozen_pages+0x6e0/0x730
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954157] ? sg_free_table+0x39/0x90
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954167] do_sendmsg+0x8d/0xe0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954197]
ceph_con_v2_try_write+0x99/0x560
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954206]
ceph_con_workfn+0x17f/0x820
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954215]
process_one_work+0x22f/0x620
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954221] ?
process_one_work+0x254/0x620
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954252] worker_thread+0x1e2/0x400
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954260] ?
__pfx_worker_thread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954265] kthread+0x109/0x140
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954271] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954278] ret_from_fork+0x3e1/0x470
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954283] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954287] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954293]
ret_from_fork_asm+0x1a/0x30
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954314] </TASK>
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954321] INFO: task
kworker/7:2:299107 blocked for more than 122 seconds.
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.955829] Not tainted 7.0.0+ #9
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.959909] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961590] task:kworker/7:2
state:D stack:0 pid:299107 tgid:299107 ppid:2 task_flags:0x4208060
flags:0x00080000
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961598] Workqueue: events
delayed_work
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961607] Call Trace:
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961610] <TASK>
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961618] __schedule+0x5b7/0x1fe0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961622] ?
mark_held_locks+0x46/0x90
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961629] ? find_held_lock+0x31/0x90
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961634] ? schedule+0x10e/0x180
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961639] ? lock_release+0xd9/0x300
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961649] schedule+0x3a/0x180
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961653]
schedule_preempt_disabled+0x15/0x30
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961658]
rwsem_down_read_slowpath+0x27b/0x6c0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961662] ?
ceph_send_cap_releases+0x4b/0x370
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961672] ?
__queue_delayed_work+0x134/0x190
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961682] down_read+0x7c/0x190
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961687]
ceph_send_cap_releases+0x4b/0x370
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961692] ?
ceph_con_send+0x1e8/0x250
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961705] delayed_work+0x1d8/0x2b0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961715]
process_one_work+0x22f/0x620
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961730] worker_thread+0x1e2/0x400
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961738] ?
__pfx_worker_thread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961744] kthread+0x109/0x140
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961749] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961756] ret_from_fork+0x3e1/0x470
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961787] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961791] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961797]
ret_from_fork_asm+0x1a/0x30
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961818] </TASK>
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961851] INFO: task
kworker/7:2:299107 <reader> blocked on an rw-semaphore likely owned by task
kworker/u32:0:241092 <reader>
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.972571] INFO: task
kworker/7:5:309930 blocked for more than 122 seconds.
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.973737] Not tainted 7.0.0+ #9
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.974413] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978473] task:kworker/7:5
state:D stack:0 pid:309930 tgid:309930 ppid:2 task_flags:0x4208060
flags:0x00080000
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978479] Workqueue: events
handle_osds_timeout
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978485] Call Trace:
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978487] <TASK>
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978493] __schedule+0x5b7/0x1fe0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978496] ?
mark_held_locks+0x46/0x90
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978502] ? find_held_lock+0x31/0x90
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978506] ? schedule+0x10e/0x180
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978509] ? lock_release+0xd9/0x300
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978537] schedule+0x3a/0x180
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978540]
schedule_preempt_disabled+0x15/0x30
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978543]
rwsem_down_write_slowpath+0x1f3/0x990
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978547] ? lock_acquire+0xc8/0x2f0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978556] ?
process_one_work+0x1ef/0x620
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978563] down_write+0xe4/0xf0
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978567]
handle_osds_timeout+0x45/0x160
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978573]
process_one_work+0x22f/0x620
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978585] worker_thread+0x1e2/0x400
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978591] ?
__pfx_worker_thread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978595] kthread+0x109/0x140
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978599] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978604] ret_from_fork+0x3e1/0x470
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978608] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978611] ? __pfx_kthread+0x10/0x10
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978615]
ret_from_fork_asm+0x1a/0x30
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978631] </TASK>
Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978653] INFO: task
kworker/7:5:309930 <writer> blocked on an rw-semaphore likely owned by task
kworker/u32:0:241092 <reader>
I am not completely sure that your patch is responsible.
Thanks,
Slava.
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: RE: [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
2026-04-24 18:11 ` Viacheslav Dubeyko
@ 2026-04-24 18:54 ` 李磊
2026-04-27 19:55 ` Viacheslav Dubeyko
0 siblings, 1 reply; 9+ messages in thread
From: 李磊 @ 2026-04-24 18:54 UTC (permalink / raw)
To: Viacheslav Dubeyko
Cc: 李磊, idryomov@gmail.com, slava@dubeyko.com,
Alex Markuze, ceph-devel@vger.kernel.org, noctis.akm@gmail.com,
linux-kernel@vger.kernel.org, 孙朝
> On Apr 25, 2026, at 02:11, Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> wrote:
>
>
> On Thu, 2026-04-23 at 19:50 +0000, Viacheslav Dubeyko wrote:
>> On Sat, 2026-04-18 at 12:12 +0800, Li Lei wrote:
>>> This reverts commit bca9fc14c70fcbbebc84954cc39994e463fb9468.
>>>
>>> Deadlock detected between mdsc->snap_rwsem and the I_NEW bit in
>>> handle_reply().
>>>
>>> - kworker/u113:1 (stat inode)
>>> 1) Holds an inode with I_NEW set
>>> 2) Requests mdsc->snap_rwsem
>>> - kworker/u113:2 (readdir)
>>> 1) Holds mdsc->snap_rwsem
>>> 2) Waits for the inode's I_NEW flag to be cleared
>>>
>>> task:kworker/u113:1 state:D stack: 0 pid:34454 ppid: 2
>>> flags:0x00004000
>>> Workqueue: ceph-msgr ceph_con_workfn [libceph]
>>> Call Trace:
>>> __schedule+0x3a9/0x8d0
>>> schedule+0x49/0xb0
>>> rwsem_down_write_slowpath+0x30a/0x5e0
>>> handle_reply+0x4d7/0x7f0 [ceph]
>>> ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
>>> mds_dispatch+0x10a/0x690 [ceph]
>>> ? calc_signature+0xdf/0x110 [libceph]
>>> ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
>>> ceph_con_process_message+0x73/0x140 [libceph]
>>> ceph_con_v1_try_read+0x2f2/0x860 [libceph]
>>> ceph_con_workfn+0x31e/0x660 [libceph]
>>> process_one_work+0x1cb/0x370
>>> worker_thread+0x30/0x390
>>> ? process_one_work+0x370/0x370
>>> kthread+0x13e/0x160
>>> ? set_kthread_struct+0x50/0x50
>>> ret_from_fork+0x1f/0x30
>>>
>>> task:kworker/u113:2 state:D stack: 0 pid:54267 ppid: 2
>>> flags:0x00004000
>>> Workqueue: ceph-msgr ceph_con_workfn [libceph]
>>> Call Trace:
>>> __schedule+0x3a9/0x8d0
>>> ? bit_wait_io+0x60/0x60
>>> ? bit_wait_io+0x60/0x60
>>> schedule+0x49/0xb0
>>> bit_wait+0xd/0x60
>>> __wait_on_bit+0x2a/0x90
>>> ? ceph_force_reconnect+0x90/0x90 [ceph]
>>> out_of_line_wait_on_bit+0x91/0xb0
>>> ? bitmap_empty+0x20/0x20
>>> ilookup5.part.29+0x69/0x90
>>> ? ceph_force_reconnect+0x90/0x90 [ceph]
>>> ? ceph_ino_compare+0x30/0x30 [ceph]
>>> iget5_locked+0x26/0x90
>>> ceph_get_inode+0x45/0x130 [ceph]
>>> ceph_readdir_prepopulate+0x59f/0xca0 [ceph]
>>> handle_reply+0x78d/0x7f0 [ceph]
>>> ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
>>> mds_dispatch+0x10a/0x690 [ceph]
>>> ? calc_signature+0xdf/0x110 [libceph]
>>> ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
>>> ceph_con_process_message+0x73/0x140 [libceph]
>>> ceph_con_v1_try_read+0x2f2/0x860 [libceph]
>>> ceph_con_workfn+0x31e/0x660 [libceph]
>>> process_one_work+0x1cb/0x370
>>> worker_thread+0x30/0x390
>>> ? process_one_work+0x370/0x370
>>> kthread+0x13e/0x160
>>> ? set_kthread_struct+0x50/0x50
>>> ret_from_fork+0x1f/0x30
>>>
>>> It's rather rare to catch, but here are fast reproduction steps
>>> (multiple MDSes are needed):
>>> 1. Find two directories (DIR_a, DIR_b) in a cephfs cluster and make sure
>>> they have different auth MDS nodes. This way, the client has a chance to
>>> run handle_reply() on different CPUs for our test
>>> (see steps 5 and 6).
>>> 2. In DIR_b, create a hard link to DIR_a/FILE_a, namely FILE_b.
>>> DIR_a/FILE_a and DIR_b/FILE_b then have the same ino (e.g. 123456).
>>> 3. Hardcode that ino in the code below, making it sleep during the stat request.
>>> ```
>>> static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
>>> goto out_err;
>>> }
>>> req->r_target_inode = in;
>>> + if (in->i_ino == 123456) {
>>> + pr_err("inode %lu found, ready to wait 10 seconds.\n",
>>> + in->i_ino);
>>> + msleep(10000);
>>> + }
>>> ```
>>> 4. Execute `echo 3 > /proc/sys/vm/drop_caches`.
>>> 5. In one shell, run `cd DIR_a; stat DIR_a/FILE_a`; this shell is expected to get
>>> stuck because of the msleep() in handle_reply().
>>> 6. In another shell, run `cd DIR_b; ls DIR_b/` to trigger ceph_readdir_prepopulate().
>>>
>>> Repeating steps 4-6 fewer than 10 times is enough to see the problem.
>>>
>>> It turns out that commit bca9fc14c70f ("ceph: when filling trace, call ceph_get_inode outside of mutexes")
>>> moved ceph_get_inode() outside snap_rwsem and thereby opened a window for
>>> ceph_get_inode() and snap_rwsem to deadlock.
>>>
>>> After commit 6a92b08fdad2 ("ceph: don't take s_mutex or snap_rwsem in ceph_check_caps"),
>>> the original commit (bca9fc14c70f) can be reverted safely.
>>>
>>> Signed-off-by: Zhao Sun <sunzhao03@kuaishou.com>
>>> Signed-off-by: Li Lei <lilei24@kuaishou.com>
>>> ---
>>> fs/ceph/inode.c | 26 ++++++++++++++++++++++----
>>> fs/ceph/mds_client.c | 29 -----------------------------
>>> 2 files changed, 22 insertions(+), 33 deletions(-)
>>>
>>> diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
>>> index d99e12d..0c241a4 100644
>>> --- a/fs/ceph/inode.c
>>> +++ b/fs/ceph/inode.c
>>> @@ -1667,10 +1667,28 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
>>> }
>>>
>>> if (rinfo->head->is_target) {
>>> - /* Should be filled in by handle_reply */
>>> - BUG_ON(!req->r_target_inode);
>>> + in = xchg(&req->r_new_inode, NULL);
>>> + tvino.ino = le64_to_cpu(rinfo->targeti.in->ino);
>>> + tvino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
>>> +
>>> + /*
>>> + * If we ended up opening an existing inode, discard
>>> + * r_new_inode
>>> + */
>>> + if (req->r_op == CEPH_MDS_OP_CREATE &&
>>> + !req->r_reply_info.has_create_ino) {
>>> + /* This should never happen on an async create */
>>> + WARN_ON_ONCE(req->r_deleg_ino);
>>> + iput(in);
>>> + in = NULL;
>>> + }
>>> +
>>> + in = ceph_get_inode(fsc->sb, tvino, in);
>>> + if (IS_ERR(in)) {
>>> + err = PTR_ERR(in);
>>> + goto done;
>>> + }
>>>
>>> - in = req->r_target_inode;
>>> err = ceph_fill_inode(in, req->r_locked_page, &rinfo->targeti,
>>> NULL, session,
>>> (!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags) &&
>>> @@ -1680,13 +1698,13 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
>>> if (err < 0) {
>>> pr_err_client(cl, "badness %p %llx.%llx\n", in,
>>> ceph_vinop(in));
>>> - req->r_target_inode = NULL;
>>> if (inode_state_read_once(in) & I_NEW)
>>> discard_new_inode(in);
>>> else
>>> iput(in);
>>> goto done;
>>> }
>>> + req->r_target_inode = in;
>>> if (inode_state_read_once(in) & I_NEW)
>>> unlock_new_inode(in);
>>> }
>>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>>> index b174627..8a27775 100644
>>> --- a/fs/ceph/mds_client.c
>>> +++ b/fs/ceph/mds_client.c
>>> @@ -3941,36 +3941,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
>>> session->s_con.peer_features);
>>> mutex_unlock(&mdsc->mutex);
>>>
>>> - /* Must find target inode outside of mutexes to avoid deadlocks */
>>> rinfo = &req->r_reply_info;
>>> - if ((err >= 0) && rinfo->head->is_target) {
>>> - struct inode *in = xchg(&req->r_new_inode, NULL);
>>> - struct ceph_vino tvino = {
>>> - .ino = le64_to_cpu(rinfo->targeti.in->ino),
>>> - .snap = le64_to_cpu(rinfo->targeti.in->snapid)
>>> - };
>>> -
>>> - /*
>>> - * If we ended up opening an existing inode, discard
>>> - * r_new_inode
>>> - */
>>> - if (req->r_op == CEPH_MDS_OP_CREATE &&
>>> - !req->r_reply_info.has_create_ino) {
>>> - /* This should never happen on an async create */
>>> - WARN_ON_ONCE(req->r_deleg_ino);
>>> - iput(in);
>>> - in = NULL;
>>> - }
>>> -
>>> - in = ceph_get_inode(mdsc->fsc->sb, tvino, in);
>>> - if (IS_ERR(in)) {
>>> - err = PTR_ERR(in);
>>> - mutex_lock(&session->s_mutex);
>>> - goto out_err;
>>> - }
>>> - req->r_target_inode = in;
>>> - }
>>> -
>>> mutex_lock(&session->s_mutex);
>>> if (err < 0) {
>>> pr_err_client(cl, "got corrupt reply mds%d(tid:%lld)\n",
>>
>> Let me run xfstests for the patch to double check that nothing is broken.
>>
>>
>
> Unexpectedly, I have a crash on generic/701 with your patch applied. I am going
> to re-run xfstests without your patch.
>
> generic/701 50s ... [failed, exit status 137]_check_dmesg: something found in
> dmesg (see xfstests-dev/results//generic/701.dmesg)
> - output mismatch (see xfstests-dev/results//generic/701.out.bad)
> --- tests/generic/701.out 2026-04-01 19:47:53.878452478 -0700
> +++ xfstests-dev/results//generic/701.out.bad 2026-04-24
> 04:57:37.887700953 -0700
> @@ -1,2 +1,3 @@
> QA output created by 701
> -Number of allocated blocks after truncate is in range
> +pwrite: Permission denied
> +./check: line 700: 311685 Killed systemd-run --quiet --
> unit "${unit}" --scope "${cmd[@]}"
> ...
> (Run 'diff -u xfstests-dev/tests/generic/701.out xfstests-
> dev/results//generic/701.out.bad' to see the entire diff)
>
> Apr 23 22:42:26 ssdfs-vm-test kernel: [29717.030383] ceph: [2bf3efd7-3e1c-453d-
> 9665-3c75a0dc96dd 19499]: mds0 caps stale
> Apr 23 22:43:18 ssdfs-vm-test kernel: [29733.938331] INFO: task
> kworker/u32:0:241092 blocked for more than 122 seconds.
> Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.940727] Not tainted 7.0.0+ #9
> Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.941449] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.942756] task:kworker/u32:0
> state:D stack:0 pid:241092 tgid:241092 ppid:2 task_flags:0x4208060
> flags:0x00080000
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942773] Workqueue: writeback
> wb_workfn (flush-ceph-766)
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942792] Call Trace:
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942795] <TASK>
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942803] __schedule+0x5b7/0x1fe0
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942812] ? lock_acquire+0xc8/0x2f0
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942819] ? schedule+0xb1/0x180
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942823] ? find_held_lock+0x31/0x90
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942829] ? schedule+0x10e/0x180
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942834] ? lock_release+0xd9/0x300
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942842] schedule+0x3a/0x180
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942847]
> schedule_preempt_disabled+0x15/0x30
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942851] __mutex_lock+0x777/0x1220
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942858] ? ceph_con_send+0x4c/0x250
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942876]
> mutex_lock_nested+0x1b/0x30
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942881] ?
> mutex_lock_nested+0x1b/0x30
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943136] ceph_con_send+0x4c/0x250
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943146] ? lock_acquire+0xc8/0x2f0
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943158] send_request+0x7d0/0xcc0
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943174] ?
> do_raw_spin_unlock+0x4e/0xe0
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943182] ?
> _raw_spin_unlock+0x22/0x50
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943222]
> __submit_request+0x2a9/0x4c0
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943236]
> ceph_osdc_start_request+0x6c/0x90
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943245]
> ceph_writepages_start+0x101b/0x21e0
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943254] ?
> __lock_acquire+0x4a2/0x2650
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943331] do_writepages+0xc2/0x170
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943338] ? do_writepages+0xc2/0x170
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943342] ? lock_release+0xd9/0x300
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943347] ?
> writeback_sb_inodes+0x1e7/0x900
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943358]
> __writeback_single_inode+0x5a/0x5e0
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943365] ?
> _raw_spin_unlock+0x22/0x50
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943377]
> writeback_sb_inodes+0x29f/0x900
> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943435]
> __writeback_inodes_wb+0x54/0xf0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943448] wb_writeback+0x3df/0x470
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943473] wb_workfn+0x364/0x520
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943493]
> process_one_work+0x22f/0x620
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943515] worker_thread+0x1e2/0x400
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943525] ?
> __pfx_worker_thread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943532] kthread+0x109/0x140
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943539] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943548] ret_from_fork+0x3e1/0x470
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943556] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943561] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943568]
> ret_from_fork_asm+0x1a/0x30
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943595] </TASK>
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943669] INFO: task
> kworker/u32:0:241092 is blocked on a mutex likely owned by task
> kworker/7:4:308292.
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953769] task:kworker/7:4
> state:R running task stack:0 pid:308292 tgid:308292 ppid:2
> task_flags:0x4208860 flags:0x00080000
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953786] Workqueue: ceph-msgr
> ceph_con_workfn
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953795] Call Trace:
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953798] <TASK>
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953805] __schedule+0x5b7/0x1fe0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953809] ?
> irqentry_exit+0x299/0x740
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953843] ?
> sysvec_apic_timer_interrupt+0x54/0xd0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953858] preempt_schedule+0x4c/0x80
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953862]
> preempt_schedule_thunk+0x16/0x30
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953874]
> _raw_spin_unlock_irq+0x4f/0x70
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953880] evict_folios+0x394/0xf00
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953886] ?
> hrtimer_start_range_ns+0x270/0x980
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953904] ?
> lock_is_held_type+0xaa/0x140
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953916]
> try_to_shrink_lruvec+0x1a9/0x3b0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953934] shrink_one+0xd1/0x1c0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953943] shrink_node+0xba8/0x13d0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953954] ? shrink_node+0xb7e/0x13d0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953969]
> do_try_to_free_pages+0xc1/0x4f0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953974] ?
> do_try_to_free_pages+0xc1/0x4f0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953984]
> try_to_free_pages+0xe0/0x310
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953989] ?
> fs_reclaim_acquire+0x4f/0x100
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954009]
> __alloc_frozen_pages_noprof+0xd49/0x1830
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954017] ?
> ip_dst_mtu_maybe_forward.constprop.0+0x24/0x2a0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954034] ? lock_acquire+0xc8/0x2f0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954039] ?
> __virt_addr_valid+0xf5/0x340
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954053] ?
> policy_nodemask+0x122/0x1d0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954064]
> alloc_pages_mpol+0xba/0x1a0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954069] ?
> _copy_from_iter+0x273/0x790
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954080]
> alloc_pages_noprof+0x59/0xe0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954087]
> skb_page_frag_refill+0xc4/0xf0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954093]
> sk_page_frag_refill+0x21/0xc0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954099]
> tcp_sendmsg_locked+0x848/0x14e0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954122] tcp_sendmsg+0x30/0x60
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954128] inet_sendmsg+0x46/0x90
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954135] sock_sendmsg+0xf1/0x100
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954142] ? kfree+0x3ab/0x560
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954147] ?
> __free_frozen_pages+0x6e0/0x730
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954157] ? sg_free_table+0x39/0x90
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954167] do_sendmsg+0x8d/0xe0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954197]
> ceph_con_v2_try_write+0x99/0x560
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954206]
> ceph_con_workfn+0x17f/0x820
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954215]
> process_one_work+0x22f/0x620
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954221] ?
> process_one_work+0x254/0x620
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954252] worker_thread+0x1e2/0x400
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954260] ?
> __pfx_worker_thread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954265] kthread+0x109/0x140
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954271] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954278] ret_from_fork+0x3e1/0x470
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954283] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954287] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954293]
> ret_from_fork_asm+0x1a/0x30
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954314] </TASK>
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954321] INFO: task
> kworker/7:2:299107 blocked for more than 122 seconds.
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.955829] Not tainted 7.0.0+ #9
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.959909] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961590] task:kworker/7:2
> state:D stack:0 pid:299107 tgid:299107 ppid:2 task_flags:0x4208060
> flags:0x00080000
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961598] Workqueue: events
> delayed_work
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961607] Call Trace:
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961610] <TASK>
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961618] __schedule+0x5b7/0x1fe0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961622] ?
> mark_held_locks+0x46/0x90
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961629] ? find_held_lock+0x31/0x90
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961634] ? schedule+0x10e/0x180
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961639] ? lock_release+0xd9/0x300
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961649] schedule+0x3a/0x180
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961653]
> schedule_preempt_disabled+0x15/0x30
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961658]
> rwsem_down_read_slowpath+0x27b/0x6c0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961662] ?
> ceph_send_cap_releases+0x4b/0x370
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961672] ?
> __queue_delayed_work+0x134/0x190
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961682] down_read+0x7c/0x190
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961687]
> ceph_send_cap_releases+0x4b/0x370
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961692] ?
> ceph_con_send+0x1e8/0x250
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961705] delayed_work+0x1d8/0x2b0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961715]
> process_one_work+0x22f/0x620
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961730] worker_thread+0x1e2/0x400
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961738] ?
> __pfx_worker_thread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961744] kthread+0x109/0x140
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961749] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961756] ret_from_fork+0x3e1/0x470
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961787] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961791] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961797]
> ret_from_fork_asm+0x1a/0x30
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961818] </TASK>
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961851] INFO: task
> kworker/7:2:299107 <reader> blocked on an rw-semaphore likely owned by task
> kworker/u32:0:241092 <reader>
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.972571] INFO: task
> kworker/7:5:309930 blocked for more than 122 seconds.
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.973737] Not tainted 7.0.0+ #9
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.974413] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978473] task:kworker/7:5
> state:D stack:0 pid:309930 tgid:309930 ppid:2 task_flags:0x4208060
> flags:0x00080000
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978479] Workqueue: events
> handle_osds_timeout
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978485] Call Trace:
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978487] <TASK>
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978493] __schedule+0x5b7/0x1fe0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978496] ?
> mark_held_locks+0x46/0x90
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978502] ? find_held_lock+0x31/0x90
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978506] ? schedule+0x10e/0x180
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978509] ? lock_release+0xd9/0x300
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978537] schedule+0x3a/0x180
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978540]
> schedule_preempt_disabled+0x15/0x30
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978543]
> rwsem_down_write_slowpath+0x1f3/0x990
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978547] ? lock_acquire+0xc8/0x2f0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978556] ?
> process_one_work+0x1ef/0x620
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978563] down_write+0xe4/0xf0
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978567]
> handle_osds_timeout+0x45/0x160
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978573]
> process_one_work+0x22f/0x620
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978585] worker_thread+0x1e2/0x400
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978591] ?
> __pfx_worker_thread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978595] kthread+0x109/0x140
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978599] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978604] ret_from_fork+0x3e1/0x470
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978608] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978611] ? __pfx_kthread+0x10/0x10
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978615]
> ret_from_fork_asm+0x1a/0x30
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978631] </TASK>
> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978653] INFO: task
> kworker/7:5:309930 <writer> blocked on an rw-semaphore likely owned by task
> kworker/u32:0:241092 <reader>
>
> I am not completely sure that your patch is responsible.
Thanks for the double check and sorry for the inconvenience.
I’ll recheck the patch and run more stress tests.
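For reference, my plan for rechecking (assuming an xfstests-dev checkout that
is already configured with a cephfs test/scratch mount) is roughly:

    cd xfstests-dev
    # re-run only the test that hung in your run
    ./check generic/701

plus several full stress passes to see whether the hang reproduces here.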
Thanks
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: RE: [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
2026-04-24 18:54 ` 李磊
@ 2026-04-27 19:55 ` Viacheslav Dubeyko
2026-04-28 5:31 ` 李磊
0 siblings, 1 reply; 9+ messages in thread
From: Viacheslav Dubeyko @ 2026-04-27 19:55 UTC (permalink / raw)
To: lilei24@kuaishou.com
Cc: ceph-devel@vger.kernel.org, sunzhao03@kuaishou.com,
idryomov@gmail.com, slava@dubeyko.com, noctis.akm@gmail.com,
Alex Markuze, linux-kernel@vger.kernel.org
On Fri, 2026-04-24 at 18:54 +0000, 李磊 wrote:
>
> > On Apr 25, 2026, at 02:11, Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> wrote:
> >
> >
> > On Thu, 2026-04-23 at 19:50 +0000, Viacheslav Dubeyko wrote:
> > > On Sat, 2026-04-18 at 12:12 +0800, Li Lei wrote:
> > > > This reverts commit bca9fc14c70fcbbebc84954cc39994e463fb9468.
> > > >
> > > > Deadlock detected between mdsc->snap_rwsem and the I_NEW bit in
> > > > handle_reply().
> > > >
> > > > - kworker/u113:1 (stat inode)
> > > > 1) Hold an inode with I_NEW set
> > > > 2) Request for mdsc->snap_rwsem
> > > > - kworker/u113:2 (readdir)
> > > > 1) Hold mdsc->snap_rwsem
> > > > 2) Wait for inode I_NEW flag to be cleared
> > > >
> > > > task:kworker/u113:1 state:D stack: 0 pid:34454 ppid: 2
> > > > flags:0x00004000
> > > > Workqueue: ceph-msgr ceph_con_workfn [libceph]
> > > > Call Trace:
> > > > __schedule+0x3a9/0x8d0
> > > > schedule+0x49/0xb0
> > > > rwsem_down_write_slowpath+0x30a/0x5e0
> > > > handle_reply+0x4d7/0x7f0 [ceph]
> > > > ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
> > > > mds_dispatch+0x10a/0x690 [ceph]
> > > > ? calc_signature+0xdf/0x110 [libceph]
> > > > ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
> > > > ceph_con_process_message+0x73/0x140 [libceph]
> > > > ceph_con_v1_try_read+0x2f2/0x860 [libceph]
> > > > ceph_con_workfn+0x31e/0x660 [libceph]
> > > > process_one_work+0x1cb/0x370
> > > > worker_thread+0x30/0x390
> > > > ? process_one_work+0x370/0x370
> > > > kthread+0x13e/0x160
> > > > ? set_kthread_struct+0x50/0x50
> > > > ret_from_fork+0x1f/0x30
> > > >
> > > > task:kworker/u113:2 state:D stack: 0 pid:54267 ppid: 2
> > > > flags:0x00004000
> > > > Workqueue: ceph-msgr ceph_con_workfn [libceph]
> > > > Call Trace:
> > > > __schedule+0x3a9/0x8d0
> > > > ? bit_wait_io+0x60/0x60
> > > > ? bit_wait_io+0x60/0x60
> > > > schedule+0x49/0xb0
> > > > bit_wait+0xd/0x60
> > > > __wait_on_bit+0x2a/0x90
> > > > ? ceph_force_reconnect+0x90/0x90 [ceph]
> > > > out_of_line_wait_on_bit+0x91/0xb0
> > > > ? bitmap_empty+0x20/0x20
> > > > ilookup5.part.29+0x69/0x90
> > > > ? ceph_force_reconnect+0x90/0x90 [ceph]
> > > > ? ceph_ino_compare+0x30/0x30 [ceph]
> > > > iget5_locked+0x26/0x90
> > > > ceph_get_inode+0x45/0x130 [ceph]
> > > > ceph_readdir_prepopulate+0x59f/0xca0 [ceph]
> > > > handle_reply+0x78d/0x7f0 [ceph]
> > > > ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
> > > > mds_dispatch+0x10a/0x690 [ceph]
> > > > ? calc_signature+0xdf/0x110 [libceph]
> > > > ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
> > > > ceph_con_process_message+0x73/0x140 [libceph]
> > > > ceph_con_v1_try_read+0x2f2/0x860 [libceph]
> > > > ceph_con_workfn+0x31e/0x660 [libceph]
> > > > process_one_work+0x1cb/0x370
> > > > worker_thread+0x30/0x390
> > > > ? process_one_work+0x370/0x370
> > > > kthread+0x13e/0x160
> > > > ? set_kthread_struct+0x50/0x50
> > > > ret_from_fork+0x1f/0x30
> > > >
> > > > It's rather rare to be caught, but here are fast reproduction steps
> > > > (multiple mds nodes are needed):
> > > > 1. Find 2 different directories (DIR_a, DIR_b) in a cephfs cluster
> > > > and make sure they have different auth mds nodes. This way, the client
> > > > has a chance to run handle_reply on different CPUs for our test
> > > > (see steps 5 and 6).
> > > > 2. In DIR_b, create a hard link of DIR_a/FILE_a, namely FILE_b.
> > > > DIR_a/FILE_a and DIR_b/FILE_b have the same ino (123456 e.g)
> > > > 3. Save ino in code below, make it sleep for stat command.
> > > > ```
> > > > static void handle_reply(struct ceph_mds_session
> > > > *session, struct ceph_msg *msg)
> > > > goto out_err;
> > > > }
> > > > req->r_target_inode = in;
> > > > + if (in->i_ino == 123456) {
> > > > + pr_err("inode %lu found, ready to wait 10 seconds.\n",
> > > > + in->i_ino);
> > > > + msleep(10000);
> > > > + }
> > > > ```
> > > > 4. Execute echo 3 > /proc/sys/vm/drop_caches
> > > > 5. In a shell, do `cd DIR_a;stat DIR_a/FILE_a`; this shell is expected to get stuck
> > > > because of the msleep() in handle_reply().
> > > > 6. In the other shell, do `cd DIR_b;ls DIR_b/` to trigger ceph_readdir_prepopulate()
> > > >
> > > > Repeat steps 4-6; fewer than 10 iterations is enough to see the problem.
> > > >
> > > > It turns out that commit bca9fc14c70f ("ceph: when filling trace, call ceph_get_inode outside of mutexes")
> > > > moved ceph_get_inode() outside snap_rwsem and opened a window for the deadlock between ceph_get_inode()
> > > > and snap_rwsem.
> > > >
> > > > After the following commit, the original commit (bca9fc14c70f) can be reverted safely:
> > > > commit 6a92b08fdad2 ("ceph: don't take s_mutex or snap_rwsem in ceph_check_caps")
> > > >
> > > > Signed-off-by: Zhao Sun <sunzhao03@kuaishou.com>
> > > > Signed-off-by: Li Lei <lilei24@kuaishou.com>
> > > > ---
> > > > fs/ceph/inode.c | 26 ++++++++++++++++++++++----
> > > > fs/ceph/mds_client.c | 29 -----------------------------
> > > > 2 files changed, 22 insertions(+), 33 deletions(-)
> > > >
> > > > diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
> > > > index d99e12d..0c241a4 100644
> > > > --- a/fs/ceph/inode.c
> > > > +++ b/fs/ceph/inode.c
> > > > @@ -1667,10 +1667,28 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
> > > > }
> > > >
> > > > if (rinfo->head->is_target) {
> > > > - /* Should be filled in by handle_reply */
> > > > - BUG_ON(!req->r_target_inode);
> > > > + in = xchg(&req->r_new_inode, NULL);
> > > > + tvino.ino = le64_to_cpu(rinfo->targeti.in->ino);
> > > > + tvino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
> > > > +
> > > > + /*
> > > > + * If we ended up opening an existing inode, discard
> > > > + * r_new_inode
> > > > + */
> > > > + if (req->r_op == CEPH_MDS_OP_CREATE &&
> > > > + !req->r_reply_info.has_create_ino) {
> > > > + /* This should never happen on an async create */
> > > > + WARN_ON_ONCE(req->r_deleg_ino);
> > > > + iput(in);
> > > > + in = NULL;
> > > > + }
> > > > +
> > > > + in = ceph_get_inode(fsc->sb, tvino, in);
> > > > + if (IS_ERR(in)) {
> > > > + err = PTR_ERR(in);
> > > > + goto done;
> > > > + }
> > > >
> > > > - in = req->r_target_inode;
> > > > err = ceph_fill_inode(in, req->r_locked_page, &rinfo->targeti,
> > > > NULL, session,
> > > > (!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags) &&
> > > > @@ -1680,13 +1698,13 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
> > > > if (err < 0) {
> > > > pr_err_client(cl, "badness %p %llx.%llx\n", in,
> > > > ceph_vinop(in));
> > > > - req->r_target_inode = NULL;
> > > > if (inode_state_read_once(in) & I_NEW)
> > > > discard_new_inode(in);
> > > > else
> > > > iput(in);
> > > > goto done;
> > > > }
> > > > + req->r_target_inode = in;
> > > > if (inode_state_read_once(in) & I_NEW)
> > > > unlock_new_inode(in);
> > > > }
> > > > diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> > > > index b174627..8a27775 100644
> > > > --- a/fs/ceph/mds_client.c
> > > > +++ b/fs/ceph/mds_client.c
> > > > @@ -3941,36 +3941,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
> > > > session->s_con.peer_features);
> > > > mutex_unlock(&mdsc->mutex);
> > > >
> > > > - /* Must find target inode outside of mutexes to avoid deadlocks */
> > > > rinfo = &req->r_reply_info;
> > > > - if ((err >= 0) && rinfo->head->is_target) {
> > > > - struct inode *in = xchg(&req->r_new_inode, NULL);
> > > > - struct ceph_vino tvino = {
> > > > - .ino = le64_to_cpu(rinfo->targeti.in->ino),
> > > > - .snap = le64_to_cpu(rinfo->targeti.in->snapid)
> > > > - };
> > > > -
> > > > - /*
> > > > - * If we ended up opening an existing inode, discard
> > > > - * r_new_inode
> > > > - */
> > > > - if (req->r_op == CEPH_MDS_OP_CREATE &&
> > > > - !req->r_reply_info.has_create_ino) {
> > > > - /* This should never happen on an async create */
> > > > - WARN_ON_ONCE(req->r_deleg_ino);
> > > > - iput(in);
> > > > - in = NULL;
> > > > - }
> > > > -
> > > > - in = ceph_get_inode(mdsc->fsc->sb, tvino, in);
> > > > - if (IS_ERR(in)) {
> > > > - err = PTR_ERR(in);
> > > > - mutex_lock(&session->s_mutex);
> > > > - goto out_err;
> > > > - }
> > > > - req->r_target_inode = in;
> > > > - }
> > > > -
> > > > mutex_lock(&session->s_mutex);
> > > > if (err < 0) {
> > > > pr_err_client(cl, "got corrupt reply mds%d(tid:%lld)\n",
> > >
> > > Let me run xfstests for the patch to double check that nothing is broken.
> > >
> > >
> >
> > Unexpectedly, I have a crash on generic/701 with your patch applied. I am going
> > to re-run xfstests without your patch.
> >
> > generic/701 50s ... [failed, exit status 137]_check_dmesg: something found in
> > dmesg (see xfstests-dev/results//generic/701.dmesg)
> > - output mismatch (see xfstests-dev/results//generic/701.out.bad)
> > --- tests/generic/701.out 2026-04-01 19:47:53.878452478 -0700
> > +++ xfstests-dev/results//generic/701.out.bad 2026-04-24
> > 04:57:37.887700953 -0700
> > @@ -1,2 +1,3 @@
> > QA output created by 701
> > -Number of allocated blocks after truncate is in range
> > +pwrite: Permission denied
> > +./check: line 700: 311685 Killed systemd-run --quiet --
> > unit "${unit}" --scope "${cmd[@]}"
> > ...
> > (Run 'diff -u xfstests-dev/tests/generic/701.out xfstests-
> > dev/results//generic/701.out.bad' to see the entire diff)
> >
> > Apr 23 22:42:26 ssdfs-vm-test kernel: [29717.030383] ceph: [2bf3efd7-3e1c-453d-
> > 9665-3c75a0dc96dd 19499]: mds0 caps stale
> > Apr 23 22:43:18 ssdfs-vm-test kernel: [29733.938331] INFO: task
> > kworker/u32:0:241092 blocked for more than 122 seconds.
> > Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.940727] Not tainted 7.0.0+ #9
> > Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.941449] "echo 0 >
> > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.942756] task:kworker/u32:0
> > state:D stack:0 pid:241092 tgid:241092 ppid:2 task_flags:0x4208060
> > flags:0x00080000
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942773] Workqueue: writeback
> > wb_workfn (flush-ceph-766)
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942792] Call Trace:
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942795] <TASK>
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942803] __schedule+0x5b7/0x1fe0
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942812] ? lock_acquire+0xc8/0x2f0
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942819] ? schedule+0xb1/0x180
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942823] ? find_held_lock+0x31/0x90
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942829] ? schedule+0x10e/0x180
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942834] ? lock_release+0xd9/0x300
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942842] schedule+0x3a/0x180
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942847]
> > schedule_preempt_disabled+0x15/0x30
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942851] __mutex_lock+0x777/0x1220
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942858] ? ceph_con_send+0x4c/0x250
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942876]
> > mutex_lock_nested+0x1b/0x30
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942881] ?
> > mutex_lock_nested+0x1b/0x30
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943136] ceph_con_send+0x4c/0x250
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943146] ? lock_acquire+0xc8/0x2f0
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943158] send_request+0x7d0/0xcc0
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943174] ?
> > do_raw_spin_unlock+0x4e/0xe0
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943182] ?
> > _raw_spin_unlock+0x22/0x50
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943222]
> > __submit_request+0x2a9/0x4c0
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943236]
> > ceph_osdc_start_request+0x6c/0x90
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943245]
> > ceph_writepages_start+0x101b/0x21e0
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943254] ?
> > __lock_acquire+0x4a2/0x2650
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943331] do_writepages+0xc2/0x170
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943338] ? do_writepages+0xc2/0x170
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943342] ? lock_release+0xd9/0x300
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943347] ?
> > writeback_sb_inodes+0x1e7/0x900
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943358]
> > __writeback_single_inode+0x5a/0x5e0
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943365] ?
> > _raw_spin_unlock+0x22/0x50
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943377]
> > writeback_sb_inodes+0x29f/0x900
> > Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943435]
> > __writeback_inodes_wb+0x54/0xf0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943448] wb_writeback+0x3df/0x470
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943473] wb_workfn+0x364/0x520
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943493]
> > process_one_work+0x22f/0x620
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943515] worker_thread+0x1e2/0x400
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943525] ?
> > __pfx_worker_thread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943532] kthread+0x109/0x140
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943539] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943548] ret_from_fork+0x3e1/0x470
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943556] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943561] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943568]
> > ret_from_fork_asm+0x1a/0x30
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943595] </TASK>
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943669] INFO: task
> > kworker/u32:0:241092 is blocked on a mutex likely owned by task
> > kworker/7:4:308292.
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953769] task:kworker/7:4
> > state:R running task stack:0 pid:308292 tgid:308292 ppid:2
> > task_flags:0x4208860 flags:0x00080000
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953786] Workqueue: ceph-msgr
> > ceph_con_workfn
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953795] Call Trace:
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953798] <TASK>
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953805] __schedule+0x5b7/0x1fe0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953809] ?
> > irqentry_exit+0x299/0x740
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953843] ?
> > sysvec_apic_timer_interrupt+0x54/0xd0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953858] preempt_schedule+0x4c/0x80
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953862]
> > preempt_schedule_thunk+0x16/0x30
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953874]
> > _raw_spin_unlock_irq+0x4f/0x70
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953880] evict_folios+0x394/0xf00
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953886] ?
> > hrtimer_start_range_ns+0x270/0x980
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953904] ?
> > lock_is_held_type+0xaa/0x140
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953916]
> > try_to_shrink_lruvec+0x1a9/0x3b0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953934] shrink_one+0xd1/0x1c0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953943] shrink_node+0xba8/0x13d0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953954] ? shrink_node+0xb7e/0x13d0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953969]
> > do_try_to_free_pages+0xc1/0x4f0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953974] ?
> > do_try_to_free_pages+0xc1/0x4f0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953984]
> > try_to_free_pages+0xe0/0x310
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953989] ?
> > fs_reclaim_acquire+0x4f/0x100
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954009]
> > __alloc_frozen_pages_noprof+0xd49/0x1830
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954017] ?
> > ip_dst_mtu_maybe_forward.constprop.0+0x24/0x2a0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954034] ? lock_acquire+0xc8/0x2f0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954039] ?
> > __virt_addr_valid+0xf5/0x340
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954053] ?
> > policy_nodemask+0x122/0x1d0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954064]
> > alloc_pages_mpol+0xba/0x1a0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954069] ?
> > _copy_from_iter+0x273/0x790
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954080]
> > alloc_pages_noprof+0x59/0xe0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954087]
> > skb_page_frag_refill+0xc4/0xf0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954093]
> > sk_page_frag_refill+0x21/0xc0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954099]
> > tcp_sendmsg_locked+0x848/0x14e0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954122] tcp_sendmsg+0x30/0x60
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954128] inet_sendmsg+0x46/0x90
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954135] sock_sendmsg+0xf1/0x100
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954142] ? kfree+0x3ab/0x560
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954147] ?
> > __free_frozen_pages+0x6e0/0x730
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954157] ? sg_free_table+0x39/0x90
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954167] do_sendmsg+0x8d/0xe0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954197]
> > ceph_con_v2_try_write+0x99/0x560
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954206]
> > ceph_con_workfn+0x17f/0x820
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954215]
> > process_one_work+0x22f/0x620
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954221] ?
> > process_one_work+0x254/0x620
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954252] worker_thread+0x1e2/0x400
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954260] ?
> > __pfx_worker_thread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954265] kthread+0x109/0x140
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954271] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954278] ret_from_fork+0x3e1/0x470
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954283] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954287] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954293]
> > ret_from_fork_asm+0x1a/0x30
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954314] </TASK>
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954321] INFO: task
> > kworker/7:2:299107 blocked for more than 122 seconds.
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.955829] Not tainted 7.0.0+ #9
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.959909] "echo 0 >
> > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961590] task:kworker/7:2
> > state:D stack:0 pid:299107 tgid:299107 ppid:2 task_flags:0x4208060
> > flags:0x00080000
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961598] Workqueue: events
> > delayed_work
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961607] Call Trace:
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961610] <TASK>
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961618] __schedule+0x5b7/0x1fe0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961622] ?
> > mark_held_locks+0x46/0x90
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961629] ? find_held_lock+0x31/0x90
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961634] ? schedule+0x10e/0x180
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961639] ? lock_release+0xd9/0x300
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961649] schedule+0x3a/0x180
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961653]
> > schedule_preempt_disabled+0x15/0x30
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961658]
> > rwsem_down_read_slowpath+0x27b/0x6c0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961662] ?
> > ceph_send_cap_releases+0x4b/0x370
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961672] ?
> > __queue_delayed_work+0x134/0x190
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961682] down_read+0x7c/0x190
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961687]
> > ceph_send_cap_releases+0x4b/0x370
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961692] ?
> > ceph_con_send+0x1e8/0x250
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961705] delayed_work+0x1d8/0x2b0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961715]
> > process_one_work+0x22f/0x620
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961730] worker_thread+0x1e2/0x400
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961738] ?
> > __pfx_worker_thread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961744] kthread+0x109/0x140
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961749] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961756] ret_from_fork+0x3e1/0x470
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961787] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961791] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961797]
> > ret_from_fork_asm+0x1a/0x30
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961818] </TASK>
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961851] INFO: task
> > kworker/7:2:299107 <reader> blocked on an rw-semaphore likely owned by task
> > kworker/u32:0:241092 <reader>
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.972571] INFO: task
> > kworker/7:5:309930 blocked for more than 122 seconds.
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.973737] Not tainted 7.0.0+ #9
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.974413] "echo 0 >
> > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978473] task:kworker/7:5
> > state:D stack:0 pid:309930 tgid:309930 ppid:2 task_flags:0x4208060
> > flags:0x00080000
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978479] Workqueue: events
> > handle_osds_timeout
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978485] Call Trace:
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978487] <TASK>
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978493] __schedule+0x5b7/0x1fe0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978496] ?
> > mark_held_locks+0x46/0x90
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978502] ? find_held_lock+0x31/0x90
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978506] ? schedule+0x10e/0x180
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978509] ? lock_release+0xd9/0x300
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978537] schedule+0x3a/0x180
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978540]
> > schedule_preempt_disabled+0x15/0x30
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978543]
> > rwsem_down_write_slowpath+0x1f3/0x990
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978547] ? lock_acquire+0xc8/0x2f0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978556] ?
> > process_one_work+0x1ef/0x620
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978563] down_write+0xe4/0xf0
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978567]
> > handle_osds_timeout+0x45/0x160
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978573]
> > process_one_work+0x22f/0x620
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978585] worker_thread+0x1e2/0x400
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978591] ?
> > __pfx_worker_thread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978595] kthread+0x109/0x140
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978599] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978604] ret_from_fork+0x3e1/0x470
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978608] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978611] ? __pfx_kthread+0x10/0x10
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978615]
> > ret_from_fork_asm+0x1a/0x30
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978631] </TASK>
> > Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978653] INFO: task
> > kworker/7:5:309930 <writer> blocked on an rw-semaphore likely owned by task
> > kworker/u32:0:241092 <reader>
> >
> > I am not completely sure that your patch is responsible.
>
> Thanks for the double check and sorry for the inconvenience.
> I’ll recheck the patch and run more stress tests.
>
>
As far as I can see, I was able to reproduce the issue without your patch
applied. It looks like I was simply "lucky" to hit it during the patched run. I
need to take a deeper look, but your patch is not responsible for the issue.
Let me spend some time analyzing the root cause and the test environment.
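In case it helps, one thing I plan to try is shortening the hung-task watchdog
window during the stress runs so that blocked tasks get reported sooner (the
traces above used the default 120-second window):

    # report blocked tasks after 30 seconds instead of the default 120
    sysctl -w kernel.hung_task_timeout_secs=30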
Have you been able to reproduce the issue on your side?
Thanks,
Slava.
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
2026-04-18 4:12 [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes" Li Lei
2026-04-23 19:50 ` Viacheslav Dubeyko
@ 2026-04-27 19:57 ` Viacheslav Dubeyko
1 sibling, 0 replies; 9+ messages in thread
From: Viacheslav Dubeyko @ 2026-04-27 19:57 UTC (permalink / raw)
To: Li Lei, slava, idryomov, amarkuze
Cc: ceph-devel, linux-kernel, noctis.akm, Zhao Sun
On Sat, 2026-04-18 at 12:12 +0800, Li Lei wrote:
> This reverts commit bca9fc14c70fcbbebc84954cc39994e463fb9468.
>
> Deadlock detected between mdsc->snap_rwsem and the I_NEW bit in
> handle_reply().
>
> - kworker/u113:1 (stat inode)
> 1) Hold an inode with I_NEW set
> 2) Request for mdsc->snap_rwsem
> - kworker/u113:2 (readdir)
> 1) Hold mdsc->snap_rwsem
> 2) Wait for inode I_NEW flag to be cleared
>
> task:kworker/u113:1 state:D stack: 0 pid:34454 ppid: 2
> flags:0x00004000
> Workqueue: ceph-msgr ceph_con_workfn [libceph]
> Call Trace:
> __schedule+0x3a9/0x8d0
> schedule+0x49/0xb0
> rwsem_down_write_slowpath+0x30a/0x5e0
> handle_reply+0x4d7/0x7f0 [ceph]
> ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
> mds_dispatch+0x10a/0x690 [ceph]
> ? calc_signature+0xdf/0x110 [libceph]
> ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
> ceph_con_process_message+0x73/0x140 [libceph]
> ceph_con_v1_try_read+0x2f2/0x860 [libceph]
> ceph_con_workfn+0x31e/0x660 [libceph]
> process_one_work+0x1cb/0x370
> worker_thread+0x30/0x390
> ? process_one_work+0x370/0x370
> kthread+0x13e/0x160
> ? set_kthread_struct+0x50/0x50
> ret_from_fork+0x1f/0x30
>
> task:kworker/u113:2 state:D stack: 0 pid:54267 ppid: 2
> flags:0x00004000
> Workqueue: ceph-msgr ceph_con_workfn [libceph]
> Call Trace:
> __schedule+0x3a9/0x8d0
> ? bit_wait_io+0x60/0x60
> ? bit_wait_io+0x60/0x60
> schedule+0x49/0xb0
> bit_wait+0xd/0x60
> __wait_on_bit+0x2a/0x90
> ? ceph_force_reconnect+0x90/0x90 [ceph]
> out_of_line_wait_on_bit+0x91/0xb0
> ? bitmap_empty+0x20/0x20
> ilookup5.part.29+0x69/0x90
> ? ceph_force_reconnect+0x90/0x90 [ceph]
> ? ceph_ino_compare+0x30/0x30 [ceph]
> iget5_locked+0x26/0x90
> ceph_get_inode+0x45/0x130 [ceph]
> ceph_readdir_prepopulate+0x59f/0xca0 [ceph]
> handle_reply+0x78d/0x7f0 [ceph]
> ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
> mds_dispatch+0x10a/0x690 [ceph]
> ? calc_signature+0xdf/0x110 [libceph]
> ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
> ceph_con_process_message+0x73/0x140 [libceph]
> ceph_con_v1_try_read+0x2f2/0x860 [libceph]
> ceph_con_workfn+0x31e/0x660 [libceph]
> process_one_work+0x1cb/0x370
> worker_thread+0x30/0x390
> ? process_one_work+0x370/0x370
> kthread+0x13e/0x160
> ? set_kthread_struct+0x50/0x50
> ret_from_fork+0x1f/0x30
>
> It's rather rare to be caught, but here are fast reproduction steps
> (multiple mds nodes are needed):
> 1. Find 2 different directories (DIR_a, DIR_b) in a cephfs cluster
> and make sure they have different auth mds nodes. This way, the client
> has a chance to run handle_reply on different CPUs for our test
> (see steps 5 and 6).
> 2. In DIR_b, create a hard link of DIR_a/FILE_a, namely FILE_b.
> DIR_a/FILE_a and DIR_b/FILE_b have the same ino (123456 e.g)
> 3. Save ino in code below, make it sleep for stat command.
> ```
> static void handle_reply(struct ceph_mds_session
> *session, struct ceph_msg *msg)
> goto out_err;
> }
> req->r_target_inode = in;
> + if (in->i_ino == 123456) {
> + pr_err("inode %lu found, ready to wait 10 seconds.\n",
> + in->i_ino);
> + msleep(10000);
> + }
> ```
> 4. Execute echo 3 > /proc/sys/vm/drop_caches
> 5. In a shell, do `cd DIR_a;stat DIR_a/FILE_a`; this shell is expected to get stuck
> because of the msleep() in handle_reply().
> 6. In the other shell, do `cd DIR_b;ls DIR_b/` to trigger ceph_readdir_prepopulate()
>
> Repeat steps 4-6; fewer than 10 iterations is enough to see the problem.
>
> It turns out that commit bca9fc14c70f ("ceph: when filling trace, call ceph_get_inode outside of mutexes")
> moved ceph_get_inode() outside snap_rwsem and opened a window for the deadlock between ceph_get_inode()
> and snap_rwsem.
>
> After the following commit, the original commit (bca9fc14c70f) can be reverted safely:
> commit 6a92b08fdad2 ("ceph: don't take s_mutex or snap_rwsem in ceph_check_caps")
>
> Signed-off-by: Zhao Sun <sunzhao03@kuaishou.com>
> Signed-off-by: Li Lei <lilei24@kuaishou.com>
> ---
> fs/ceph/inode.c | 26 ++++++++++++++++++++++----
> fs/ceph/mds_client.c | 29 -----------------------------
> 2 files changed, 22 insertions(+), 33 deletions(-)
>
> diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
> index d99e12d..0c241a4 100644
> --- a/fs/ceph/inode.c
> +++ b/fs/ceph/inode.c
> @@ -1667,10 +1667,28 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
> }
>
> if (rinfo->head->is_target) {
> - /* Should be filled in by handle_reply */
> - BUG_ON(!req->r_target_inode);
> + in = xchg(&req->r_new_inode, NULL);
> + tvino.ino = le64_to_cpu(rinfo->targeti.in->ino);
> + tvino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
> +
> + /*
> + * If we ended up opening an existing inode, discard
> + * r_new_inode
> + */
> + if (req->r_op == CEPH_MDS_OP_CREATE &&
> + !req->r_reply_info.has_create_ino) {
> + /* This should never happen on an async create */
> + WARN_ON_ONCE(req->r_deleg_ino);
> + iput(in);
> + in = NULL;
> + }
> +
> + in = ceph_get_inode(fsc->sb, tvino, in);
> + if (IS_ERR(in)) {
> + err = PTR_ERR(in);
> + goto done;
> + }
>
> - in = req->r_target_inode;
> err = ceph_fill_inode(in, req->r_locked_page, &rinfo->targeti,
> NULL, session,
> (!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags) &&
> @@ -1680,13 +1698,13 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
> if (err < 0) {
> pr_err_client(cl, "badness %p %llx.%llx\n", in,
> ceph_vinop(in));
> - req->r_target_inode = NULL;
> if (inode_state_read_once(in) & I_NEW)
> discard_new_inode(in);
> else
> iput(in);
> goto done;
> }
> + req->r_target_inode = in;
> if (inode_state_read_once(in) & I_NEW)
> unlock_new_inode(in);
> }
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index b174627..8a27775 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -3941,36 +3941,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
> session->s_con.peer_features);
> mutex_unlock(&mdsc->mutex);
>
> - /* Must find target inode outside of mutexes to avoid deadlocks */
> rinfo = &req->r_reply_info;
> - if ((err >= 0) && rinfo->head->is_target) {
> - struct inode *in = xchg(&req->r_new_inode, NULL);
> - struct ceph_vino tvino = {
> - .ino = le64_to_cpu(rinfo->targeti.in->ino),
> - .snap = le64_to_cpu(rinfo->targeti.in->snapid)
> - };
> -
> - /*
> - * If we ended up opening an existing inode, discard
> - * r_new_inode
> - */
> - if (req->r_op == CEPH_MDS_OP_CREATE &&
> - !req->r_reply_info.has_create_ino) {
> - /* This should never happen on an async create */
> - WARN_ON_ONCE(req->r_deleg_ino);
> - iput(in);
> - in = NULL;
> - }
> -
> - in = ceph_get_inode(mdsc->fsc->sb, tvino, in);
> - if (IS_ERR(in)) {
> - err = PTR_ERR(in);
> - mutex_lock(&session->s_mutex);
> - goto out_err;
> - }
> - req->r_target_inode = in;
> - }
> -
> mutex_lock(&session->s_mutex);
> if (err < 0) {
> pr_err_client(cl, "got corrupt reply mds%d(tid:%lld)\n",
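To summarize the inverted ordering being reverted here (a simplified sketch
reconstructed from the traces in the commit message, not the exact code):

/*
 * stat worker (handle_reply)        readdir worker (handle_reply)
 * --------------------------        -----------------------------
 * ceph_get_inode()
 *   inode allocated, I_NEW set
 *                                   takes mdsc->snap_rwsem
 * rwsem_down_write_slowpath()
 *   blocks: snap_rwsem is held
 *                                   ceph_readdir_prepopulate()
 *                                     ceph_get_inode()
 *                                       iget5_locked() waits for I_NEW
 *                                       to clear -> deadlock
 */

With the revert, the target inode lookup happens in ceph_fill_trace(), on the
same side of snap_rwsem as the readdir path, so the ABBA ordering above should
no longer be possible.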
Reviewed-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Thanks,
Slava.
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Re: RE: [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
2026-04-27 19:55 ` Viacheslav Dubeyko
@ 2026-04-28 5:31 ` 李磊
2026-04-28 17:04 ` Viacheslav Dubeyko
0 siblings, 1 reply; 9+ messages in thread
From: 李磊 @ 2026-04-28 5:31 UTC (permalink / raw)
To: Viacheslav Dubeyko
Cc: 李磊, ceph-devel@vger.kernel.org, 孙朝,
idryomov@gmail.com, slava@dubeyko.com, noctis.akm@gmail.com,
Alex Markuze, linux-kernel@vger.kernel.org
> On Apr 28, 2026, at 03:55, Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> wrote:
>
>
> On Fri, 2026-04-24 at 18:54 +0000, 李磊 wrote:
>>
>>> On Apr 25, 2026, at 02:11, Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> wrote:
>>>
>>>
>>> On Thu, 2026-04-23 at 19:50 +0000, Viacheslav Dubeyko wrote:
>>>> On Sat, 2026-04-18 at 12:12 +0800, Li Lei wrote:
>>>>> This reverts commit bca9fc14c70fcbbebc84954cc39994e463fb9468.
>>>>>
>>>>> Deadlock detected between mdsc->snap_rwsem and the I_NEW bit in
>>>>> handle_reply().
>>>>>
>>>>> - kworker/u113:1 (stat inode)
>>>>> 1) Hold an inode with I_NEW set
>>>>> 2) Request for mdsc->snap_rwsem
>>>>> - kworker/u113:2 (readdir)
>>>>> 1) Hold mdsc->snap_rwsem
>>>>> 2) Wait for inode I_NEW flag to be cleared
>>>>>
>>>>> task:kworker/u113:1 state:D stack: 0 pid:34454 ppid: 2
>>>>> flags:0x00004000
>>>>> Workqueue: ceph-msgr ceph_con_workfn [libceph]
>>>>> Call Trace:
>>>>> __schedule+0x3a9/0x8d0
>>>>> schedule+0x49/0xb0
>>>>> rwsem_down_write_slowpath+0x30a/0x5e0
>>>>> handle_reply+0x4d7/0x7f0 [ceph]
>>>>> ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
>>>>> mds_dispatch+0x10a/0x690 [ceph]
>>>>> ? calc_signature+0xdf/0x110 [libceph]
>>>>> ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
>>>>> ceph_con_process_message+0x73/0x140 [libceph]
>>>>> ceph_con_v1_try_read+0x2f2/0x860 [libceph]
>>>>> ceph_con_workfn+0x31e/0x660 [libceph]
>>>>> process_one_work+0x1cb/0x370
>>>>> worker_thread+0x30/0x390
>>>>> ? process_one_work+0x370/0x370
>>>>> kthread+0x13e/0x160
>>>>> ? set_kthread_struct+0x50/0x50
>>>>> ret_from_fork+0x1f/0x30
>>>>>
>>>>> task:kworker/u113:2 state:D stack: 0 pid:54267 ppid: 2
>>>>> flags:0x00004000
>>>>> Workqueue: ceph-msgr ceph_con_workfn [libceph]
>>>>> Call Trace:
>>>>> __schedule+0x3a9/0x8d0
>>>>> ? bit_wait_io+0x60/0x60
>>>>> ? bit_wait_io+0x60/0x60
>>>>> schedule+0x49/0xb0
>>>>> bit_wait+0xd/0x60
>>>>> __wait_on_bit+0x2a/0x90
>>>>> ? ceph_force_reconnect+0x90/0x90 [ceph]
>>>>> out_of_line_wait_on_bit+0x91/0xb0
>>>>> ? bitmap_empty+0x20/0x20
>>>>> ilookup5.part.29+0x69/0x90
>>>>> ? ceph_force_reconnect+0x90/0x90 [ceph]
>>>>> ? ceph_ino_compare+0x30/0x30 [ceph]
>>>>> iget5_locked+0x26/0x90
>>>>> ceph_get_inode+0x45/0x130 [ceph]
>>>>> ceph_readdir_prepopulate+0x59f/0xca0 [ceph]
>>>>> handle_reply+0x78d/0x7f0 [ceph]
>>>>> ? ceph_tcp_recvmsg+0x6f/0xa0 [libceph]
>>>>> mds_dispatch+0x10a/0x690 [ceph]
>>>>> ? calc_signature+0xdf/0x110 [libceph]
>>>>> ? ceph_x_check_message_signature+0x58/0xc0 [libceph]
>>>>> ceph_con_process_message+0x73/0x140 [libceph]
>>>>> ceph_con_v1_try_read+0x2f2/0x860 [libceph]
>>>>> ceph_con_workfn+0x31e/0x660 [libceph]
>>>>> process_one_work+0x1cb/0x370
>>>>> worker_thread+0x30/0x390
>>>>> ? process_one_work+0x370/0x370
>>>>> kthread+0x13e/0x160
>>>>> ? set_kthread_struct+0x50/0x50
>>>>> ret_from_fork+0x1f/0x30
>>>>>
>>>>> It's rather rare to be caught, but here are fast reproduction steps
>>>>> (multiple mds nodes are needed):
>>>>> 1. Find 2 different directories (DIR_a, DIR_b) in a cephfs cluster
>>>>> and make sure they have different auth mds nodes. This way, the client
>>>>> has a chance to run handle_reply on different CPUs for our test
>>>>> (see steps 5 and 6).
>>>>> 2. In DIR_b, create a hard link of DIR_a/FILE_a, namely FILE_b.
>>>>> DIR_a/FILE_a and DIR_b/FILE_b have the same ino (123456 e.g)
>>>>> 3. Save ino in code below, make it sleep for stat command.
>>>>> ```
>>>>> static void handle_reply(struct ceph_mds_session
>>>>> *session, struct ceph_msg *msg)
>>>>> goto out_err;
>>>>> }
>>>>> req->r_target_inode = in;
>>>>> + if (in->i_ino == 123456) {
>>>>> + pr_err("inode %lu found, ready to wait 10 seconds.\n",
>>>>> + in->i_ino);
>>>>> + msleep(10000);
>>>>> + }
>>>>> ```
>>>>> 4. Execute echo 3 > /proc/sys/vm/drop_caches
>>>>> 5. In a shell, do `cd DIR_a;stat DIR_a/FILE_a`; this shell is expected to get stuck
>>>>> because of the msleep() in handle_reply().
>>>>> 6. In the other shell, do `cd DIR_b;ls DIR_b/` to trigger ceph_readdir_prepopulate()
>>>>>
>>>>> Repeat steps 4-6; fewer than 10 iterations is enough to see the problem.
>>>>>
>>>>> It turns out that commit bca9fc14c70f ("ceph: when filling trace, call ceph_get_inode outside of mutexes")
>>>>> moved ceph_get_inode() outside snap_rwsem and opened a window for the deadlock between ceph_get_inode()
>>>>> and snap_rwsem.
>>>>>
>>>>> After the following commit, the original commit (bca9fc14c70f) can be reverted safely:
>>>>> commit 6a92b08fdad2 ("ceph: don't take s_mutex or snap_rwsem in ceph_check_caps")
>>>>>
>>>>> Signed-off-by: Zhao Sun <sunzhao03@kuaishou.com>
>>>>> Signed-off-by: Li Lei <lilei24@kuaishou.com>
>>>>> ---
>>>>> fs/ceph/inode.c | 26 ++++++++++++++++++++++----
>>>>> fs/ceph/mds_client.c | 29 -----------------------------
>>>>> 2 files changed, 22 insertions(+), 33 deletions(-)
>>>>>
>>>>> diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
>>>>> index d99e12d..0c241a4 100644
>>>>> --- a/fs/ceph/inode.c
>>>>> +++ b/fs/ceph/inode.c
>>>>> @@ -1667,10 +1667,28 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
>>>>> }
>>>>>
>>>>> if (rinfo->head->is_target) {
>>>>> - /* Should be filled in by handle_reply */
>>>>> - BUG_ON(!req->r_target_inode);
>>>>> + in = xchg(&req->r_new_inode, NULL);
>>>>> + tvino.ino = le64_to_cpu(rinfo->targeti.in->ino);
>>>>> + tvino.snap = le64_to_cpu(rinfo->targeti.in->snapid);
>>>>> +
>>>>> + /*
>>>>> + * If we ended up opening an existing inode, discard
>>>>> + * r_new_inode
>>>>> + */
>>>>> + if (req->r_op == CEPH_MDS_OP_CREATE &&
>>>>> + !req->r_reply_info.has_create_ino) {
>>>>> + /* This should never happen on an async create */
>>>>> + WARN_ON_ONCE(req->r_deleg_ino);
>>>>> + iput(in);
>>>>> + in = NULL;
>>>>> + }
>>>>> +
>>>>> + in = ceph_get_inode(fsc->sb, tvino, in);
>>>>> + if (IS_ERR(in)) {
>>>>> + err = PTR_ERR(in);
>>>>> + goto done;
>>>>> + }
>>>>>
>>>>> - in = req->r_target_inode;
>>>>> err = ceph_fill_inode(in, req->r_locked_page, &rinfo->targeti,
>>>>> NULL, session,
>>>>> (!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags) &&
>>>>> @@ -1680,13 +1698,13 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
>>>>> if (err < 0) {
>>>>> pr_err_client(cl, "badness %p %llx.%llx\n", in,
>>>>> ceph_vinop(in));
>>>>> - req->r_target_inode = NULL;
>>>>> if (inode_state_read_once(in) & I_NEW)
>>>>> discard_new_inode(in);
>>>>> else
>>>>> iput(in);
>>>>> goto done;
>>>>> }
>>>>> + req->r_target_inode = in;
>>>>> if (inode_state_read_once(in) & I_NEW)
>>>>> unlock_new_inode(in);
>>>>> }
>>>>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>>>>> index b174627..8a27775 100644
>>>>> --- a/fs/ceph/mds_client.c
>>>>> +++ b/fs/ceph/mds_client.c
>>>>> @@ -3941,36 +3941,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
>>>>> session->s_con.peer_features);
>>>>> mutex_unlock(&mdsc->mutex);
>>>>>
>>>>> - /* Must find target inode outside of mutexes to avoid deadlocks */
>>>>> rinfo = &req->r_reply_info;
>>>>> - if ((err >= 0) && rinfo->head->is_target) {
>>>>> - struct inode *in = xchg(&req->r_new_inode, NULL);
>>>>> - struct ceph_vino tvino = {
>>>>> - .ino = le64_to_cpu(rinfo->targeti.in->ino),
>>>>> - .snap = le64_to_cpu(rinfo->targeti.in->snapid)
>>>>> - };
>>>>> -
>>>>> - /*
>>>>> - * If we ended up opening an existing inode, discard
>>>>> - * r_new_inode
>>>>> - */
>>>>> - if (req->r_op == CEPH_MDS_OP_CREATE &&
>>>>> - !req->r_reply_info.has_create_ino) {
>>>>> - /* This should never happen on an async create */
>>>>> - WARN_ON_ONCE(req->r_deleg_ino);
>>>>> - iput(in);
>>>>> - in = NULL;
>>>>> - }
>>>>> -
>>>>> - in = ceph_get_inode(mdsc->fsc->sb, tvino, in);
>>>>> - if (IS_ERR(in)) {
>>>>> - err = PTR_ERR(in);
>>>>> - mutex_lock(&session->s_mutex);
>>>>> - goto out_err;
>>>>> - }
>>>>> - req->r_target_inode = in;
>>>>> - }
>>>>> -
>>>>> mutex_lock(&session->s_mutex);
>>>>> if (err < 0) {
>>>>> pr_err_client(cl, "got corrupt reply mds%d(tid:%lld)\n",
>>>>
>>>> Let me run xfstests for the patch to double check that nothing is broken.
>>>>
>>>>
>>>
>>> Unexpectedly, I have a crash on generic/701 with your patch applied. I am going
>>> to re-run xfstests without your patch.
>>>
>>> generic/701 50s ... [failed, exit status 137]_check_dmesg: something found in
>>> dmesg (see xfstests-dev/results//generic/701.dmesg)
>>> - output mismatch (see xfstests-dev/results//generic/701.out.bad)
>>> --- tests/generic/701.out 2026-04-01 19:47:53.878452478 -0700
>>> +++ xfstests-dev/results//generic/701.out.bad 2026-04-24
>>> 04:57:37.887700953 -0700
>>> @@ -1,2 +1,3 @@
>>> QA output created by 701
>>> -Number of allocated blocks after truncate is in range
>>> +pwrite: Permission denied
>>> +./check: line 700: 311685 Killed systemd-run --quiet --
>>> unit "${unit}" --scope "${cmd[@]}"
>>> ...
>>> (Run 'diff -u xfstests-dev/tests/generic/701.out xfstests-
>>> dev/results//generic/701.out.bad' to see the entire diff)
>>>
>>> Apr 23 22:42:26 ssdfs-vm-test kernel: [29717.030383] ceph: [2bf3efd7-3e1c-453d-
>>> 9665-3c75a0dc96dd 19499]: mds0 caps stale
>>> Apr 23 22:43:18 ssdfs-vm-test kernel: [29733.938331] INFO: task
>>> kworker/u32:0:241092 blocked for more than 122 seconds.
>>> Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.940727] Not tainted 7.0.0+ #9
>>> Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.941449] "echo 0 >
>>> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> Apr 23 22:43:25 ssdfs-vm-test kernel: [29733.942756] task:kworker/u32:0
>>> state:D stack:0 pid:241092 tgid:241092 ppid:2 task_flags:0x4208060
>>> flags:0x00080000
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942773] Workqueue: writeback
>>> wb_workfn (flush-ceph-766)
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942792] Call Trace:
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942795] <TASK>
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942803] __schedule+0x5b7/0x1fe0
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942812] ? lock_acquire+0xc8/0x2f0
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942819] ? schedule+0xb1/0x180
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942823] ? find_held_lock+0x31/0x90
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942829] ? schedule+0x10e/0x180
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942834] ? lock_release+0xd9/0x300
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942842] schedule+0x3a/0x180
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942847]
>>> schedule_preempt_disabled+0x15/0x30
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942851] __mutex_lock+0x777/0x1220
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942858] ? ceph_con_send+0x4c/0x250
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942876]
>>> mutex_lock_nested+0x1b/0x30
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.942881] ?
>>> mutex_lock_nested+0x1b/0x30
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943136] ceph_con_send+0x4c/0x250
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943146] ? lock_acquire+0xc8/0x2f0
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943158] send_request+0x7d0/0xcc0
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943174] ?
>>> do_raw_spin_unlock+0x4e/0xe0
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943182] ?
>>> _raw_spin_unlock+0x22/0x50
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943222]
>>> __submit_request+0x2a9/0x4c0
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943236]
>>> ceph_osdc_start_request+0x6c/0x90
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943245]
>>> ceph_writepages_start+0x101b/0x21e0
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943254] ?
>>> __lock_acquire+0x4a2/0x2650
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943331] do_writepages+0xc2/0x170
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943338] ? do_writepages+0xc2/0x170
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943342] ? lock_release+0xd9/0x300
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943347] ?
>>> writeback_sb_inodes+0x1e7/0x900
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943358]
>>> __writeback_single_inode+0x5a/0x5e0
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943365] ?
>>> _raw_spin_unlock+0x22/0x50
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943377]
>>> writeback_sb_inodes+0x29f/0x900
>>> Apr 23 22:43:26 ssdfs-vm-test kernel: [29733.943435]
>>> __writeback_inodes_wb+0x54/0xf0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943448] wb_writeback+0x3df/0x470
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943473] wb_workfn+0x364/0x520
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943493]
>>> process_one_work+0x22f/0x620
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943515] worker_thread+0x1e2/0x400
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943525] ?
>>> __pfx_worker_thread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943532] kthread+0x109/0x140
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943539] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943548] ret_from_fork+0x3e1/0x470
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943556] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943561] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943568]
>>> ret_from_fork_asm+0x1a/0x30
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943595] </TASK>
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.943669] INFO: task
>>> kworker/u32:0:241092 is blocked on a mutex likely owned by task
>>> kworker/7:4:308292.
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953769] task:kworker/7:4
>>> state:R running task stack:0 pid:308292 tgid:308292 ppid:2
>>> task_flags:0x4208860 flags:0x00080000
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953786] Workqueue: ceph-msgr
>>> ceph_con_workfn
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953795] Call Trace:
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953798] <TASK>
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953805] __schedule+0x5b7/0x1fe0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953809] ?
>>> irqentry_exit+0x299/0x740
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953843] ?
>>> sysvec_apic_timer_interrupt+0x54/0xd0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953858] preempt_schedule+0x4c/0x80
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953862]
>>> preempt_schedule_thunk+0x16/0x30
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953874]
>>> _raw_spin_unlock_irq+0x4f/0x70
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953880] evict_folios+0x394/0xf00
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953886] ?
>>> hrtimer_start_range_ns+0x270/0x980
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953904] ?
>>> lock_is_held_type+0xaa/0x140
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953916]
>>> try_to_shrink_lruvec+0x1a9/0x3b0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953934] shrink_one+0xd1/0x1c0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953943] shrink_node+0xba8/0x13d0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953954] ? shrink_node+0xb7e/0x13d0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953969]
>>> do_try_to_free_pages+0xc1/0x4f0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953974] ?
>>> do_try_to_free_pages+0xc1/0x4f0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953984]
>>> try_to_free_pages+0xe0/0x310
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.953989] ?
>>> fs_reclaim_acquire+0x4f/0x100
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954009]
>>> __alloc_frozen_pages_noprof+0xd49/0x1830
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954017] ?
>>> ip_dst_mtu_maybe_forward.constprop.0+0x24/0x2a0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954034] ? lock_acquire+0xc8/0x2f0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954039] ?
>>> __virt_addr_valid+0xf5/0x340
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954053] ?
>>> policy_nodemask+0x122/0x1d0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954064]
>>> alloc_pages_mpol+0xba/0x1a0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954069] ?
>>> _copy_from_iter+0x273/0x790
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954080]
>>> alloc_pages_noprof+0x59/0xe0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954087]
>>> skb_page_frag_refill+0xc4/0xf0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954093]
>>> sk_page_frag_refill+0x21/0xc0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954099]
>>> tcp_sendmsg_locked+0x848/0x14e0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954122] tcp_sendmsg+0x30/0x60
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954128] inet_sendmsg+0x46/0x90
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954135] sock_sendmsg+0xf1/0x100
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954142] ? kfree+0x3ab/0x560
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954147] ? __free_frozen_pages+0x6e0/0x730
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954157] ? sg_free_table+0x39/0x90
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954167] do_sendmsg+0x8d/0xe0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954197] ceph_con_v2_try_write+0x99/0x560
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954206] ceph_con_workfn+0x17f/0x820
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954215] process_one_work+0x22f/0x620
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954221] ? process_one_work+0x254/0x620
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954252] worker_thread+0x1e2/0x400
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954260] ? __pfx_worker_thread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954265] kthread+0x109/0x140
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954271] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954278] ret_from_fork+0x3e1/0x470
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954283] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954287] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954293] ret_from_fork_asm+0x1a/0x30
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954314] </TASK>
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.954321] INFO: task kworker/7:2:299107 blocked for more than 122 seconds.
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.955829] Not tainted 7.0.0+ #9
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.959909] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961590] task:kworker/7:2 state:D stack:0 pid:299107 tgid:299107 ppid:2 task_flags:0x4208060 flags:0x00080000
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961598] Workqueue: events delayed_work
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961607] Call Trace:
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961610] <TASK>
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961618] __schedule+0x5b7/0x1fe0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961622] ? mark_held_locks+0x46/0x90
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961629] ? find_held_lock+0x31/0x90
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961634] ? schedule+0x10e/0x180
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961639] ? lock_release+0xd9/0x300
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961649] schedule+0x3a/0x180
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961653] schedule_preempt_disabled+0x15/0x30
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961658] rwsem_down_read_slowpath+0x27b/0x6c0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961662] ? ceph_send_cap_releases+0x4b/0x370
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961672] ? __queue_delayed_work+0x134/0x190
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961682] down_read+0x7c/0x190
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961687] ceph_send_cap_releases+0x4b/0x370
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961692] ? ceph_con_send+0x1e8/0x250
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961705] delayed_work+0x1d8/0x2b0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961715] process_one_work+0x22f/0x620
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961730] worker_thread+0x1e2/0x400
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961738] ? __pfx_worker_thread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961744] kthread+0x109/0x140
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961749] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961756] ret_from_fork+0x3e1/0x470
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961787] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961791] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961797] ret_from_fork_asm+0x1a/0x30
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961818] </TASK>
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.961851] INFO: task kworker/7:2:299107 <reader> blocked on an rw-semaphore likely owned by task kworker/u32:0:241092 <reader>
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.972571] INFO: task kworker/7:5:309930 blocked for more than 122 seconds.
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.973737] Not tainted 7.0.0+ #9
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.974413] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978473] task:kworker/7:5 state:D stack:0 pid:309930 tgid:309930 ppid:2 task_flags:0x4208060 flags:0x00080000
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978479] Workqueue: events handle_osds_timeout
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978485] Call Trace:
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978487] <TASK>
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978493] __schedule+0x5b7/0x1fe0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978496] ? mark_held_locks+0x46/0x90
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978502] ? find_held_lock+0x31/0x90
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978506] ? schedule+0x10e/0x180
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978509] ? lock_release+0xd9/0x300
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978537] schedule+0x3a/0x180
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978540] schedule_preempt_disabled+0x15/0x30
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978543] rwsem_down_write_slowpath+0x1f3/0x990
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978547] ? lock_acquire+0xc8/0x2f0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978556] ? process_one_work+0x1ef/0x620
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978563] down_write+0xe4/0xf0
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978567] handle_osds_timeout+0x45/0x160
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978573] process_one_work+0x22f/0x620
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978585] worker_thread+0x1e2/0x400
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978591] ? __pfx_worker_thread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978595] kthread+0x109/0x140
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978599] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978604] ret_from_fork+0x3e1/0x470
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978608] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978611] ? __pfx_kthread+0x10/0x10
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978615] ret_from_fork_asm+0x1a/0x30
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978631] </TASK>
>>> Apr 23 22:43:30 ssdfs-vm-test kernel: [29733.978653] INFO: task kworker/7:5:309930 <writer> blocked on an rw-semaphore likely owned by task kworker/u32:0:241092 <reader>
>>>
>>> I am not completely sure that your patch is responsible.
>>
>> Thanks for the double-check, and sorry for the inconvenience.
>> I’ll recheck the patch and run more stress tests.
>>
>>
>
> As far as I can see, I was able to reproduce the issue without your patch
> applied. It looks like I was simply "lucky" to hit it. I need to take a
> deeper look, but your patch is not responsible for the issue. Let me spend
> some time analyzing the root cause and the environment.
>
> Have you been able to reproduce the issue on your side?
I’ve tried generic/701 again several times, but no issues occurred.
Judging only from the stack traces, I believe it’s a deadlock between
writeback and memory reclaim. The dependency chain is as follows:

1. Several D-state threads are waiting for osdc->lock to be released.
2. osdc->lock is held by kworker/u32:0:241092, which is performing writeback,
and kworker/u32:0:241092 is asking for con->mutex in send_request().
3. con->mutex is held by kworker/7:4:308292, which is currently calling do_sendmsg().

Because of the memory shortage, step 3 has to reclaim memory and wait for step 2
to write back and free some folios. But step 2 is blocked on con->mutex, which is
already held by step 3.
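
If the chain above is right, one conventional mitigation is a memalloc_nofs
scope around the allocating section under con->mutex, so that direct reclaim
triggered there cannot recurse into filesystem writeback and block on the
very lock being held. A minimal sketch of the mechanism (a hypothetical
helper, not actual libceph code; send_under_con_mutex and the do_send
callback are made up for illustration):

```
#include <linux/mutex.h>
#include <linux/sched/mm.h>

/* Hypothetical sketch, not libceph code. */
static int send_under_con_mutex(struct mutex *con_mutex,
				int (*do_send)(void *arg), void *arg)
{
	unsigned int nofs_flags;
	int ret;

	mutex_lock(con_mutex);

	/*
	 * Allocations in this scope implicitly behave like GFP_NOFS,
	 * so direct reclaim cannot enter filesystem writeback and then
	 * wait on the con->mutex we are holding right now.
	 */
	nofs_flags = memalloc_nofs_save();
	ret = do_send(arg);		/* may allocate internally */
	memalloc_nofs_restore(nofs_flags);

	mutex_unlock(con_mutex);
	return ret;
}
```

Whether NOFS (rather than NOIO) is the right scope, and where exactly it
would belong in the ceph_con_workfn() path, needs a closer look; the sketch
only shows the mechanism.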
Thanks,
Li
* RE: Re: RE: [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
2026-04-28 5:31 ` 李磊
@ 2026-04-28 17:04 ` Viacheslav Dubeyko
2026-04-29 17:39 ` Viacheslav Dubeyko
0 siblings, 1 reply; 9+ messages in thread
From: Viacheslav Dubeyko @ 2026-04-28 17:04 UTC (permalink / raw)
To: lilei24@kuaishou.com
Cc: Alex Markuze, ceph-devel@vger.kernel.org, sunzhao03@kuaishou.com,
slava@dubeyko.com, idryomov@gmail.com, noctis.akm@gmail.com,
linux-kernel@vger.kernel.org
On Tue, 2026-04-28 at 05:31 +0000, 李磊 wrote:
>
> > >
<skipped>
> >
> > As far as I can see, I was able to reproduce the issue without your patch
> > applied. It looks like I was simply "lucky" to hit it. I need to take a
> > deeper look, but your patch is not responsible for the issue. Let me spend
> > some time analyzing the root cause and the environment.
> >
> > Have you been able to reproduce the issue on your side?
>
> I’ve tried generic/701 again several times, but no issues occurred.
>
I assume that running the generic/701 test-case alone cannot reproduce the
issue. My understanding is that the whole auto group needs to be executed.
Somehow, I was lucky enough to reproduce the issue. :)
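(For reference, and assuming a standard xfstests setup, that is something
like `./check -g auto` from the xfstests tree, versus `./check generic/701`
for the single test.)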
> Judging only from the stack traces, I believe it’s a deadlock between
> writeback and memory reclaim. The dependency chain is as follows:
>
> 1. Several D-state threads are waiting for osdc->lock to be released.
> 2. osdc->lock is held by kworker/u32:0:241092, which is performing writeback,
> and kworker/u32:0:241092 is asking for con->mutex in send_request().
> 3. con->mutex is held by kworker/7:4:308292, which is currently calling do_sendmsg().
>
> Because of the memory shortage, step 3 has to reclaim memory and wait for step 2
> to write back and free some folios. But step 2 is blocked on con->mutex, which is
> already held by step 3.
>
I am going to take a deeper look into the issue today. But, probably, one day
might not be enough. :)
Thanks,
Slava.
* RE: Re: RE: [PATCH] Revert "ceph: when filling trace, call ceph_get_inode outside of mutexes"
2026-04-28 17:04 ` Viacheslav Dubeyko
@ 2026-04-29 17:39 ` Viacheslav Dubeyko
0 siblings, 0 replies; 9+ messages in thread
From: Viacheslav Dubeyko @ 2026-04-29 17:39 UTC (permalink / raw)
To: lilei24@kuaishou.com
Cc: idryomov@gmail.com, Alex Markuze, ceph-devel@vger.kernel.org,
slava@dubeyko.com, sunzhao03@kuaishou.com, noctis.akm@gmail.com,
linux-kernel@vger.kernel.org
On Tue, 2026-04-28 at 17:04 +0000, Viacheslav Dubeyko wrote:
> On Tue, 2026-04-28 at 05:31 +0000, 李磊 wrote:
> >
> > > >
>
> <skipped>
>
> > >
> > > As far as I can see, I was able to reproduce the issue without your patch
> > > applied. It looks like I was simply "lucky" to hit it. I need to take a
> > > deeper look, but your patch is not responsible for the issue. Let me spend
> > > some time analyzing the root cause and the environment.
> > >
> > > Have you been able to reproduce the issue on your side?
> >
> > I’ve tried generic/701 again several times, but no issues occurred.
> >
>
> I assume that running the generic/701 test-case alone cannot reproduce the
> issue. My understanding is that the whole auto group needs to be executed.
> Somehow, I was lucky enough to reproduce the issue. :)
>
> > Judging only from the stack traces, I believe it’s a deadlock between
> > writeback and memory reclaim. The dependency chain is as follows:
> >
> > 1. Several D-state threads are waiting for osdc->lock to be released.
> > 2. osdc->lock is held by kworker/u32:0:241092, which is performing writeback,
> > and kworker/u32:0:241092 is asking for con->mutex in send_request().
> > 3. con->mutex is held by kworker/7:4:308292, which is currently calling do_sendmsg().
> >
> > Because of the memory shortage, step 3 has to reclaim memory and wait for step 2
> > to write back and free some folios. But step 2 is blocked on con->mutex, which is
> > already held by step 3.
> >
>
> I am going to take a deeper look into the issue today. But, probably, one day
> might not be enough. :)
>
>
As far as I can see, the main problem is memory pressure. And it looks like
this patch can fix the issue:
From 84572f7fcb7aad422c71c87b52dfea805eeb7fbd Mon Sep 17 00:00:00 2001
From: Hristo Venev <hristo@venev.name>
Date: Mon, 6 Apr 2026 16:01:20 +0300
Subject: [PATCH] ceph: put folios not suitable for writeback
The batch holds references to the folios (see `filemap_get_folios`,
`folio_batch_release`), so we need to `folio_put` the folios we remove.
---
fs/ceph/addr.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 390f122feeaa..10110af74715 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1329,6 +1329,7 @@ int ceph_process_folio_batch(struct address_space *mapping,
if (rc == -ENODATA) {
rc = 0;
folio_unlock(folio);
+ folio_put(folio);
ceph_wbc->fbatch.folios[i] = NULL;
continue;
} else if (rc == -E2BIG) {
@@ -1340,6 +1341,7 @@ int ceph_process_folio_batch(struct address_space *mapping,
if (!folio_clear_dirty_for_io(folio)) {
doutc(cl, "%p !folio_clear_dirty_for_io\n", folio);
folio_unlock(folio);
+ folio_put(folio);
ceph_wbc->fbatch.folios[i] = NULL;
continue;
}
--
2.53.0
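
To spell out the reference contract this relies on: filemap_get_folios()
takes a reference on every folio it adds to the batch, and
folio_batch_release() only drops the references of folios still present in
the batch. So any folio removed from the batch early has to be folio_put()
by hand. A minimal sketch of the pattern (hypothetical code, not the actual
fs/ceph/addr.c logic; the compaction loop stands in for ceph's batch
shifting):

```
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>

/* Hypothetical sketch of the batch/reference pattern, not ceph code. */
static void writeback_batch_sketch(struct address_space *mapping,
				   pgoff_t *start, pgoff_t end)
{
	struct folio_batch fbatch;
	unsigned int i, j, nr;

	folio_batch_init(&fbatch);
	/* Takes a reference on each folio placed into the batch. */
	nr = filemap_get_folios(mapping, start, end, &fbatch);

	for (i = 0; i < nr; i++) {
		struct folio *folio = fbatch.folios[i];

		folio_lock(folio);
		if (!folio_clear_dirty_for_io(folio)) {
			folio_unlock(folio);
			folio_put(folio);	 /* the fix: drop our ref */
			fbatch.folios[i] = NULL; /* drop from the batch */
			continue;
		}
		/*
		 * ... hand the folio off to writeback here; assume that
		 * path takes its own reference ...
		 */
		folio_unlock(folio);
	}

	/*
	 * Compact away the NULL slots so folio_batch_release() only
	 * sees folios whose references we still own.
	 */
	for (i = 0, j = 0; i < nr; i++) {
		if (fbatch.folios[i])
			fbatch.folios[j++] = fbatch.folios[i];
	}
	fbatch.nr = j;

	folio_batch_release(&fbatch); /* drops the remaining references */
}
```

Without the two folio_put() calls, every folio skipped on those paths keeps
an elevated refcount and can never be reclaimed, which would be consistent
with the memory pressure seen in the traces above.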
I am going to run xfstests without this patch again to confirm that the issue
is reproducible again.
Thanks,
Slava.