From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
To: "lilei24@kuaishou.com" <lilei24@kuaishou.com>
Cc: "ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
"idryomov@gmail.com" <idryomov@gmail.com>,
Alex Markuze <amarkuze@redhat.com>,
"slava@dubeyko.com" <slava@dubeyko.com>,
Xiubo Li <xiubli@redhat.com>,
"noctis.akm@gmail.com" <noctis.akm@gmail.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: RE: Re: [PATCH] ceph: fix potential stray locked folios during umount
Date: Fri, 24 Apr 2026 22:02:10 +0000
Message-ID: <03996cfe17d52acd2fea09aa807fd267be419f6c.camel@ibm.com>
In-Reply-To: <FA76A115-EFBB-4702-9459-45ABE8670E34@kuaishou.com>
On Fri, 2026-04-24 at 19:44 +0000, 李磊 wrote:
>
> > On Apr 24, 2026, at 03:30, Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> wrote:
> >
> > On Sat, 2026-04-18 at 21:39 +0800, Li Lei wrote:
> > > During umount, we only wait for stopping_blockers to drop to zero for
> > > a certain time specified by mount_timeout, and we continue the rest of
> > > the procedure even if there are in-flight requests. This behavior may
> > > leave some folios locked even after cephfs is unmounted, which causes
> > > other kernel threads to hang.
> > >
> > > A buffered read process calls filemap_update_page() and waits in
> > > folio_put_wait_locked() with the TASK_KILLABLE flag set, which means the
> > > process can be killed and the filesystem can then be unmounted
> > > successfully (no files open in it). Umount calls truncate_inode_pages()
> > > and waits on locked pages for those inodes whose i_count == 0. In this
> > > way, no locked folios for this filesystem are left in the system after
> > > umount exits.
> > >
> > > However, things are different for cephfs. Cephfs calls ihold() and
> > > submits an OSD request for the buffered read with the folio locked. Once
> > > the buffered read process is killed, the inode will be skipped in
> > > evict_inodes(), because its i_count > 0. Furthermore, the folios are
> > > still locked. They can only be unlocked in netfs_unlock_read_folio().
> > >
> > > stopping_blockers should block umount from proceeding, but we only wait
> > > for mount_timeout (default 60s) even if there are still in-flight
> > > requests out there, leaving stray locked folios. Other kthreads, like
> > > kcompactd, could be stuck on those locked folios forever.
> > >
> > > Steps to Reproduce:
> > > 1. echo 3 > /proc/sys/vm/drop_caches.
> > > 2. dd if=cephfs/xxx.img of=/dev/null
> > >    Make sure cephfs/xxx.img is big enough to leave time for the
> > >    following commands.
> > > 3. Execute 'systemctl stop ceph-osd@*' on the OSD nodes.
> > >    A tiny cluster helps here; stopping all the OSDs is much easier.
> > > 4. kill -9 `pidof dd`.
> > >    The buffered read process must be killed at this moment, while
> > >    in-flight read requests can still be observed in
> > >    /sys/kernel/debug/ceph/xxxx/osdc.
> > > 5. umount cephfs
> > >    Wait for 60s if you mounted cephfs with the default mount options.
> > >
> > > We got the warning:
> > > ceph: [b2c9a006-9ad8-48e9-8257-6fb1e1b91014 66562]: umount timed out, 0
> > > VFS: Busy inodes after unmount of ceph (ceph)
> > >
> > > If the check_data_corruption option is disabled, kcompactd may get
> > > stuck later. If it is enabled, we catch the bug immediately.
> > >
> > > [94543.042953] ------------[ cut here ]------------
> > > [94543.049391] kernel BUG at fs/super.c:654!
> > > [94543.054171] Oops: invalid opcode: 0000 [#1] SMP PTI
> > > [94543.059881] CPU: 25 UID: 0 PID: 3451674 Comm: umount Kdump: loaded Tainted: G S OE 7.0.0-dirty #2 PREEMPTLAZY
> > > [94543.072678] Tainted: [S]=CPU_OUT_OF_SPEC, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> > > [94543.080918] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.5.5 08/16/2017
> > > [94543.089755] RIP: 0010:generic_shutdown_super+0x111/0x120
> > > [94543.095982] Code: cc cc e8 c2 1f ef ff 48 8b bb d0 00 00 00 eb db 48 8b 43 28 48 8d b3 98 03 00 00 48 c7 c7 90 09 55 8c 48 8b 10 e8 0f c3 cc ff <0f> 0b 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90 90 90
> > > [94543.117607] RSP: 0018:ffffce35f8c53d40 EFLAGS: 00010246
> > > [94543.123793] RAX: 000000000000002d RBX: ffff8ba94d0d9000 RCX: 0000000000000000
> > > [94543.132125] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8bc0df91c600
> > > [94543.140460] RBP: ffffffffc13b52c0 R08: 0000000000000000 R09: ffffce35f8c53be0
> > > [94543.148801] R10: 0000000000000001 R11: 0000000000000001 R12: ffff8ba94af0e000
> > > [94543.157150] R13: ffff8ba94d0d9000 R14: 0000000000000004 R15: ffff8ba946d9a000
> > > [94543.165505] FS: 00007fb1c607c840(0000) GS:ffff8bc1520e4000(0000) knlGS:0000000000000000
> > > [94543.174943] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > [94543.181769] CR2: 00000000d64e5000 CR3: 000000189c9f2002 CR4: 00000000003726f0
> > > [94543.190160] Call Trace:
> > > [94543.193317] <TASK>
> > > [94543.196088] kill_anon_super+0x12/0x40
> > > [94543.200719] ceph_kill_sb+0xda/0x2c0 [ceph]
> > > [94543.205877] ? radix_tree_delete_item+0x68/0xd0
> > > [94543.211395] deactivate_locked_super+0x31/0xb0
> > > [94543.216815] cleanup_mnt+0xcb/0x110
> > > [94543.221169] task_work_run+0x58/0x80
> > > [94543.225629] exit_to_user_mode_loop+0x13f/0x4d0
> > > [94543.231163] do_syscall_64+0x1ef/0x840
> > > [94543.235827] ? do_syscall_64+0x101/0x840
> > > [94543.240687] ? do_user_addr_fault+0x20e/0x6b0
> > > [94543.246036] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> > > [94543.252166] RIP: 0033:0x7fb1c5f0ccab
> > > [94543.256650] Code: 73 31 0e 00 f7 d8 64 89 01 48 83 c8 ff c3 90 f3 0f 1e fa 31 f6 e9 05 00 00 00 0f 1f 44 00 00 f3 0f 1e fa b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 39 31 0e 00 f7 d8
> > > [94543.278690] RSP: 002b:00007ffe96f80ea8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
> > > [94543.287710] RAX: 0000000000000000 RBX: 00007fb1c61fb264 RCX: 00007fb1c5f0ccab
> > > [94543.296251] RDX: fffffffffffffe88 RSI: 0000000000000000 RDI: 000055ae9dcc6ec0
> > > [94543.304801] RBP: 000055ae9dcc6c90 R08: 0000000000000000 R09: 00007ffe96f7fc50
> > > [94543.313358] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> > > [94543.321922] R13: 000055ae9dcc6ec0 R14: 000055ae9dcc6da0 R15: 0000000000000000
> > >
> > > So make umount wait until all the in-flight requests return, for a
> > > clean and safe unmount.
> > >
> > > Fixes: 1464de9f813e ("ceph: wait for OSD requests' callbacks to finish when unmounting")
> > > Signed-off-by: Li Lei <lilei24@kuaishou.com>
> > > ---
> > > fs/ceph/super.c | 11 ++++-------
> > > 1 file changed, 4 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/fs/ceph/super.c b/fs/ceph/super.c
> > > index 2aed6b3..48e63c1 100644
> > > --- a/fs/ceph/super.c
> > > +++ b/fs/ceph/super.c
> > > @@ -1569,13 +1569,10 @@ static void ceph_kill_sb(struct super_block *s)
> > > spin_unlock(&mdsc->stopping_lock);
> > >
> > > if (wait && atomic_read(&mdsc->stopping_blockers)) {
> > > - long timeleft = wait_for_completion_killable_timeout(
> > > - &mdsc->stopping_waiter,
> > > - fsc->client->options->mount_timeout);
> > > - if (!timeleft) /* timed out */
> > > - pr_warn_client(cl, "umount timed out, %ld\n", timeleft);
> > > - else if (timeleft < 0) /* killed */
> > > - pr_warn_client(cl, "umount was killed, %ld\n", timeleft);
> > > + int rc = wait_for_completion_killable(
> > > + &mdsc->stopping_waiter);
> > > + if (rc < 0) /* killed */
> > > + pr_warn_client(cl, "umount was killed\n");
> > > }
> > >
> > > mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHED;
> >
> > I am not completely sure that it's the right approach to fix the problem. If I
> > understood correctly, we have a situation where somehow an OSD is down and the
> > dd process has been killed.
> Yes. This is the situation we met in our production environment. We found 1/4
> of our client nodes had the kcompactd task hung waiting for a folio lock.
> After debugging with the crash tool, we figured out that the page belonged to a
> cephfs file; however, that cephfs had already been unmounted.
>
> The 'Steps to Reproduce' above emulate the real workload.
>
>
> > It's not a normal or usual situation. So, your suggestion is to
> > hang in the unmount process forever because we cannot service the in-flight
> > requests. Is that really a good way to fix the problem? Probably, we need
> > smarter logic here for waiting for the in-flight requests to end. But a timeout
> > makes sense to protect against the situation where something goes wrong and we
> > can never finish the unmount at all. What do you think? Maybe we need some loop
> > that checks the state by waking up after some timeout. If the number of
> > in-flight requests is decreasing, then we should keep waiting. But if nothing
> > changes over time, then it means that something is wrong and it makes sense to
> > unmount anyway. Because, finally, we will have some call trace in the system
> > log anyway. Does it make sense?
> >
> > Also, we have a similar pattern in ceph_kill_sb():
> >
> > 	if (atomic64_read(&mdsc->dirty_folios) > 0) {
> > 		wait_queue_head_t *wq = &mdsc->flush_end_wq;
> > 		long timeleft = wait_event_killable_timeout(*wq,
> > 				atomic64_read(&mdsc->dirty_folios) <= 0,
> > 				fsc->client->options->mount_timeout);
> > 		if (!timeleft) /* timed out */
> > 			pr_warn_client(cl, "umount timed out, %ld\n", timeleft);
> > 		else if (timeleft < 0) /* killed */
> > 			pr_warn_client(cl, "umount was killed, %ld\n", timeleft);
> > 	}
> >
>
> I understand your concern. This patch is truly a straightforward workaround.
> So, how about we just abort OSD requests if they take too long to return
> during unmounting?
The question here is how to define when an OSD request is taking too long.
Potentially, processing could be really slow for some reason. From one point of
view, if we know that the destination OSD is down or we have a network
partition, then it doesn't make sense to wait too long. I am thinking about
potentially checking the number of in-flight OSD requests. If this number is
going down, then we should keep waiting; otherwise, if this number doesn't
change, then we should finish the unmount without waiting. Does it make sense?
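
As a rough sketch of what I mean (untested, only to illustrate the idea; it
reuses the existing stopping_blockers counter as the progress metric, and the
helper name is made up):

	static void ceph_wait_blockers_with_progress(struct ceph_fs_client *fsc)
	{
		struct ceph_mds_client *mdsc = fsc->mdsc;
		struct ceph_client *cl = fsc->client;
		int prev = atomic_read(&mdsc->stopping_blockers);

		while (prev > 0) {
			long left = wait_for_completion_killable_timeout(
					&mdsc->stopping_waiter,
					fsc->client->options->mount_timeout);
			int cur = atomic_read(&mdsc->stopping_blockers);

			if (left > 0 || cur == 0)
				return;		/* all blockers drained */
			if (left < 0) {		/* killed by a signal */
				pr_warn_client(cl, "umount was killed\n");
				return;
			}
			if (cur >= prev) {	/* timed out without progress */
				pr_warn_client(cl, "umount stalled, %d blockers left\n",
					       cur);
				return;
			}
			prev = cur;		/* progress was made, wait one more round */
		}
	}

This way, a slow-but-alive cluster keeps getting extra mount_timeout rounds,
while a dead one bounds the wait to a single timeout.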
>
> Compared to leaving some locked folios in the system, return -EIO to those
> OSD requests which may never return is more reasonable. This is because locked
> folios left behind Cephfs unmount may block kcompactd and render the entire
> system unstable.
>
I agree. It makes sense. If we know that some OSD requests will never return,
then we need to manage this situation in better way. But how could we detect
that OSD request will never return?
> Besides, successful unmounting doesn't guarantee dirty buffers are successfully
> written to the backend. For example, when a buffered write returns, the local
> filesystem may encounter bad blocks on the local disk and -EIO is returned to
> the writeback kworkers. Therefore, in our scenario, does it make sense if we
> treat the OSD requests that have been flight for a certain period as failed,
> And return -EIO to the caller?
This is the main question: how do we detect that OSD requests have failed?
As far as I can see, if an OSD is down and osd_request_timeout is not set (the
default), a stalled write can block unmount indefinitely. I assume that you have
osd_request_timeout unset. So, maybe, we need to reconsider the policy for
managing stuck OSD requests during unmount.
Laggy OSD path: if any request's r_stamp is older than osd_keepalive_timeout,
the OSD goes on a slow_osds list and ceph_con_keepalive() is called, sending a
keepalive byte over TCP. If the TCP connection is silently broken, the keepalive
write will fail, triggering con_fault().
Timed-out request path: if osd_request_timeout is set (default 0 = disabled),
requests older than that deadline are aborted with -ETIMEDOUT via
abort_request().
Homeless requests: requests that can't be mapped to any OSD are also checked
against osd_request_timeout.
ceph_con_keepalive_expired() uses the timestamp of the last keepalive
acknowledgement (con->last_keepalive_ack) to determine whether the peer has gone
silent for longer than the keepalive interval. When this fires, the connection
is considered dead and con_fault() is triggered.
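
For reference, my reading of handle_timeout() in net/ceph/osd_client.c condenses
to roughly this per-OSD scan (heavily simplified sketch; the real code holds
osdc->lock and also handles homeless requests and backoff state):

	static bool scan_osd_requests(struct ceph_osd *osd,
				      struct ceph_options *opts)
	{
		unsigned long cutoff = jiffies - opts->osd_keepalive_timeout;
		unsigned long expiry_cutoff = jiffies - opts->osd_request_timeout;
		bool laggy = false;
		struct rb_node *p;

		for (p = rb_first(&osd->o_requests); p; ) {
			struct ceph_osd_request *req =
				rb_entry(p, struct ceph_osd_request, r_node);

			p = rb_next(p);	/* abort_request() may unlink req */

			/* no activity for osd_keepalive_timeout: the OSD is laggy */
			if (time_before(req->r_stamp, cutoff))
				laggy = true;

			/* osd_request_timeout == 0 means "never abort" */
			if (opts->osd_request_timeout &&
			    time_before(req->r_start_stamp, expiry_cutoff))
				abort_request(req, -ETIMEDOUT);
		}

		return laggy;	/* caller moves laggy OSDs to the slow_osds list */
	}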
So, we need to find a good solution built from this available functionality.
>
> Lastly, I think we can just use the stopping blockers to replace dirty_folios
> and simplify the unmount-and-wait process. Accordingly, in ceph_kill_sb(), we
> only need to wait for the stopping_blockers count to drop to zero. If a timeout
> occurs, we can cancel all the in-flight requests and print some warning messages.
dirty_folios is "how much dirty data is still to be flushed," while
stopping_blockers is "how many threads are currently inside code that holds an
implicit reference to the MDS client." Unmount must drain both, in order, and
the two counters solve entirely different races.
mdsc->dirty_folios is the count of dirty page-cache folios not yet written
back. It is incremented in ceph_dirty_folio() at the moment a folio transitions
from clean to dirty in the page cache. It is decremented in the OSD
write-completion callback, after the OSD acknowledges the writeback and
end_page_writeback() is called. It represents the number of file-data folios
that have been dirtied (modified in the page cache) but whose data has not yet
reached an OSD (i.e., writeback is pending or in flight).
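
Condensed, the accounting looks roughly like this (from my reading of
fs/ceph/addr.c; error paths and snapshot context handling omitted):

	/* in ceph_dirty_folio(): the folio transitions clean -> dirty */
	atomic64_inc(&mdsc->dirty_folios);

	/* in the writeback completion path, once the OSD has acknowledged: */
	if (atomic64_dec_return(&mdsc->dirty_folios) <= 0)
		wake_up_all(&mdsc->flush_end_wq);	/* unblocks ceph_kill_sb() */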
mdsc->stopping_blockers counts in-progress MDS/OSD message handlers. It is
incremented by ceph_inc_mds_stopping_blocker() / ceph_inc_osd_stopping_blocker()
at the entry of any async operation that must not be interrupted mid-flight by
shutdown. It is decremented at the exit of the async operations' handlers.
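
Roughly, the blocker pattern looks like this (condensed from fs/ceph/mds_client.c
as I read it; the MDS variant also bumps the session sequence first):

	/* handler entry: refuse to start if unmount is already flushing */
	spin_lock(&mdsc->stopping_lock);
	if (mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHING) {
		spin_unlock(&mdsc->stopping_lock);
		return false;
	}
	atomic_inc(&mdsc->stopping_blockers);
	spin_unlock(&mdsc->stopping_lock);
	return true;

	/* handler exit: the last blocker out wakes the waiter in ceph_kill_sb() */
	spin_lock(&mdsc->stopping_lock);
	if (!atomic_dec_return(&mdsc->stopping_blockers) &&
	    mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHING)
		complete_all(&mdsc->stopping_waiter);
	spin_unlock(&mdsc->stopping_lock);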
I don't think that we can use the stopping blockers only.
Thanks,
Slava.