public inbox for stable@vger.kernel.org
* [REGRESSION] [PATCH v2] ceph: fix num_ops OBOE when crypto allocation fails
@ 2026-03-18  2:37 Sam Edwards
  2026-03-18 19:41 ` Viacheslav Dubeyko
  0 siblings, 1 reply; 5+ messages in thread
From: Sam Edwards @ 2026-03-18  2:37 UTC (permalink / raw)
  To: Ilya Dryomov, Alex Markuze, Viacheslav Dubeyko
  Cc: Milind Changire, Xiubo Li, Jeff Layton, ceph-devel, linux-kernel,
	regressions, Sam Edwards, stable

move_dirty_folio_in_page_array() may fail if the file is encrypted, the
dirty folio is not the first in the batch, and it fails to allocate a
bounce buffer to hold the ciphertext. When that happens,
ceph_process_folio_batch() simply redirties the folio and flushes the
current batch -- it can retry that folio in a future batch.

However, if this failed folio is not contiguous with the last folio that
did make it into the batch, then ceph_process_folio_batch() has already
incremented `ceph_wbc->num_ops`; because it doesn't follow through and
add the discontiguous folio to the array, ceph_submit_write() -- which
expects that `ceph_wbc->num_ops` accurately reflects the number of
contiguous ranges (and therefore the required number of "write extent"
ops) in the writeback -- will panic the kernel:

    BUG_ON(ceph_wbc->op_idx + 1 != req->r_num_ops);

This issue can be reproduced on affected kernels by writing to
fscrypt-enabled CephFS file(s) with a 4KiB-written/4KiB-skipped/repeat
pattern (total filesize should not matter) and gradually increasing the
system's memory pressure until a bounce buffer allocation fails.

Fix this crash by decrementing `ceph_wbc->num_ops` back to the correct
value when move_dirty_folio_in_page_array() fails after the folio has
already been counted as starting a new (i.e. still-empty) extent.

The defect corrected by this patch has existed since 2022 (see first
`Fixes:`), but another bug blocked multi-folio encrypted writeback until
recently (see second `Fixes:`). The second commit made it into 6.18.16,
6.19.6, and 7.0-rc1, unmasking the panic in those versions. This patch
therefore fixes a regression (panic) introduced by cac190c7674f.

Cc: stable@vger.kernel.org # v6.18+
Fixes: d55207717ded ("ceph: add encryption support to writepage and writepages")
Fixes: cac190c7674f ("ceph: fix write storm on fscrypted files")
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---

Changes v1->v2:
- Added a paragraph to the commit log briefly explaining the I/O pattern to
  reproduce the issue (thanks Slava)

- Additionally Cc'd regressions@lists.linux.dev as required when handling
  regressions

Feedback not addressed:
- "Commit message should link to the mentioned BUG_ON line in a source listing"
    (link would not really help anyone, and the line is a moving target anyway)

- "Commit message should indicate that ceph_wbc->num_ops is passed to
   ceph_osdc_new_request() to explain why ceph_wbc->num_ops == req->r_num_ops"
    (ceph_wbc->num_ops is easy enough to search; and the cause->effect of the
     BUG_ON() is secondary to the central point that ceph_process_folio_batch()
     is responsible for ensuring ceph_wbc->num_ops is correct before returning)

- "An issue should be filed in the Ceph Redmine, linked via Closes:"
    (thanks Ilya for clarifying this is unnecessary)

---
 fs/ceph/addr.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index e87b3bb94ee8..f366e159ffa6 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1366,6 +1366,10 @@ void ceph_process_folio_batch(struct address_space *mapping,
 		rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
 				folio);
 		if (rc) {
+			/* Did we just begin a new contiguous op? Never mind! */
+			if (ceph_wbc->len == 0)
+				ceph_wbc->num_ops--;
+
 			folio_redirty_for_writepage(wbc, folio);
 			folio_unlock(folio);
 			break;
-- 
2.52.0



Thread overview: 5+ messages
2026-03-18  2:37 [REGRESSION] [PATCH v2] ceph: fix num_ops OBOE when crypto allocation fails Sam Edwards
2026-03-18 19:41 ` Viacheslav Dubeyko
2026-03-19 19:14   ` Viacheslav Dubeyko
2026-03-25  2:56   ` Sam Edwards
2026-03-25 11:55     ` Ilya Dryomov
