* [PATCH v2 0/6] ceph: CephFS writeback correctness and performance fixes
@ 2026-01-07 21:01 Sam Edwards
2026-01-07 21:01 ` [PATCH v2 1/6] ceph: Do not propagate page array emplacement errors as batch errors Sam Edwards
` (5 more replies)
0 siblings, 6 replies; 12+ messages in thread
From: Sam Edwards @ 2026-01-07 21:01 UTC (permalink / raw)
To: Xiubo Li, Ilya Dryomov
Cc: Viacheslav Dubeyko, Christian Brauner, Milind Changire,
Jeff Layton, ceph-devel, linux-kernel, Sam Edwards
Hello list,
This is v2 of my series that addresses interrelated issues in CephFS writeback,
fixing crashes, improving robustness, and correcting performance behavior,
particularly for fscrypted files. [1]
Changes v1->v2:
- Clarify patch #1's commit message to establish that failures on the first
folio are not possible.
- Add another patch to move the "clean up page array on abort" logic to a new
ceph_discard_page_array() function. (Thanks Slava!)
- Change the wording "grossly degraded performance" to instead read
"correspondingly degraded performance." This makes the causal relationship
clearer (that write throughput is limited much more significantly by write
op/s due to the bug) without making any claims (qualitative or otherwise)
about significance. (Thanks Slava!)
- Reset locked_pages = 0 immediately when the page array is discarded,
simplifying patch #5 ("ceph: Assert writeback loop invariants")
- Reword "as evidenced by the previous two patches which fix oopses" to
"as evidenced by two recent patches which fix oopses" and refer to the
patches by subject (being in the same series, I cannot refer to them by hash)
I received several items of feedback on v1 that I have chosen not to adopt --
mostly because I believe they run contrary to kernel norms about strong
contracts, redundancy, not masking bugs, and regressions. (It is possible that
I am mistaken on these norms, and may still include them in a v3 if someone
makes good points in favor of them or consensus overrules me.)
Feedback on v1 not adopted:
- "Patch #1, which fixes a crash in unreachable code, should be reordered after
patch #6 (#5 in v1), which fixes a bug that makes the code unreachable,
in order to simplify crash reproduction in review"
The order of the patchset represents the canonical commit order of the series.
Committing patch #6 before patch #1 would therefore introduce a regression, in
direct violation of longstanding kernel policy.
- "Patch #1 should not swallow errors from move_dirty_folio_in_page_array() if
they happen on the first folio."
It is not possible for move_dirty_folio_in_page_array() to encounter an error
on the first folio, and this is not likely to change in the future. Even if
such an error were to occur, the caller already expects
ceph_process_folio_batch() to "successfully" lock zero pages from time to time.
Swallowing errors in ceph_process_folio_batch() is consistent with its design.
- "Patch #1 should include the call trace and reproduction steps in the commit
message."
The commit message already explains the execution path to the failure, which is
what people really care about (the call trace is just a means to that end).
Due to compiler inlining, the raw call trace is in any case almost completely
opaque as to why the oops happens.
Reproduction steps are not particularly valuable for posterity, but a curious
party in the future can always refer back to mailing list discussions to find
them.
- "Patch #2 should not exist: it removes the return code from
ceph_process_folio_batch(), which is essential to indicate failures to the
caller."
The caller of ceph_process_folio_batch() only cares about success/failure as a
boolean: the exact error code is discarded. The error code is therefore
redundant with ceph_wbc.locked_pages, which indicates not only success/failure
but also the degree of success (how many pages were locked). This makes
ceph_wbc.locked_pages a more appealing single source of truth than the error
code.
Further, the error return mechanism is misleading to future programmers: it
implies that it is acceptable for ceph_process_folio_batch() to abort the
operation (after already selecting some folios for write). The caller cannot
handle this. Removing the return code altogether makes the contract explicit,
which is the central point of the patch.
- "Patch #5 (#4 in v1) should not introduce BUG_ON() in the writeback path."
The writeback path already contains BUG_ON() checks for several invariants,
which is consistent with kernel norms: check invariants, fail fast, and don't
try to tolerate ambiguity. Rather than introducing new failure possibilities
into the writeback loop, patch #5 catches existing invariant violations
sooner. Its purpose is to tighten up the code and prevent regressions, not to
fix any particular bug.
- "Patch #6 (#5 in v1) should include benchmark results to support its claim of improved performance."
My environment is not very representative of a typical Ceph deployment, and
benchmarking is tough to get right. I am not prepared to stand behind any
particular estimated/expected speedup factor. Rather, the rationale for this
patch is a simple computer science principle: increasing the amount of useful
work done per operation reduces total overhead. I have changed the phrasing
"grossly degraded performance" to "correspondingly degraded performance" to
emphasize that the performance degradation follows from the bottleneck, without
implying that I'm making some kind of claim about magnitude.
Warm regards,
Sam
[1]: https://lore.kernel.org/ceph-devel/20251231024316.4643-1-CFSworks@gmail.com/T/
Sam Edwards (6):
ceph: Do not propagate page array emplacement errors as batch errors
ceph: Remove error return from ceph_process_folio_batch()
ceph: Free page array when ceph_submit_write fails
ceph: Split out page-array discarding to a function
ceph: Assert writeback loop invariants
ceph: Fix write storm on fscrypted files
fs/ceph/addr.c | 82 +++++++++++++++++++++++++++++---------------------
1 file changed, 48 insertions(+), 34 deletions(-)
--
2.51.2
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v2 1/6] ceph: Do not propagate page array emplacement errors as batch errors
2026-01-07 21:01 [PATCH v2 0/6] ceph: CephFS writeback correctness and performance fixes Sam Edwards
@ 2026-01-07 21:01 ` Sam Edwards
2026-01-08 20:05 ` Viacheslav Dubeyko
2026-01-07 21:01 ` [PATCH v2 2/6] ceph: Remove error return from ceph_process_folio_batch() Sam Edwards
` (4 subsequent siblings)
5 siblings, 1 reply; 12+ messages in thread
From: Sam Edwards @ 2026-01-07 21:01 UTC (permalink / raw)
To: Xiubo Li, Ilya Dryomov
Cc: Viacheslav Dubeyko, Christian Brauner, Milind Changire,
Jeff Layton, ceph-devel, linux-kernel, Sam Edwards, stable
When fscrypt is enabled, move_dirty_folio_in_page_array() may fail
because it needs to allocate bounce buffers to store the encrypted
versions of each folio. Each folio beyond the first allocates its bounce
buffer with GFP_NOWAIT. Failures are common (and expected) under this
allocation mode; they should flush (not abort) the batch.
However, ceph_process_folio_batch() uses the same `rc` variable for its
own return code and for capturing the return codes of its routine calls;
failing to reset `rc` back to 0 results in the error being propagated
out to the main writeback loop, which cannot actually tolerate any
errors here: once `ceph_wbc.pages` is allocated, it must be passed to
ceph_submit_write() to be freed. If it survives until the next iteration
(e.g. due to the goto being followed), ceph_allocate_page_array()'s
BUG_ON() will oops the worker. (Subsequent patches in this series make
the loop more robust.)
Note that this failure mode is currently masked due to another bug
(addressed later in this series) that prevents multiple encrypted folios
from being selected for the same write.
For now, just reset `rc` when redirtying the folio to prevent errors in
move_dirty_folio_in_page_array() from propagating. (Note that
move_dirty_folio_in_page_array() is careful never to return errors on
the first folio, so there is no need to check for that.) After this
change, ceph_process_folio_batch() no longer returns errors; its only
remaining failure indicator is `locked_pages == 0`, which the caller
already handles correctly. The next patch in this series removes the
now-unused error return entirely.
Fixes: ce80b76dd327 ("ceph: introduce ceph_process_folio_batch() method")
Cc: stable@vger.kernel.org
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---
fs/ceph/addr.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 63b75d214210..3462df35d245 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1369,6 +1369,7 @@ int ceph_process_folio_batch(struct address_space *mapping,
rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
folio);
if (rc) {
+ rc = 0;
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
break;
--
2.51.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v2 2/6] ceph: Remove error return from ceph_process_folio_batch()
2026-01-07 21:01 [PATCH v2 0/6] ceph: CephFS writeback correctness and performance fixes Sam Edwards
2026-01-07 21:01 ` [PATCH v2 1/6] ceph: Do not propagate page array emplacement errors as batch errors Sam Edwards
@ 2026-01-07 21:01 ` Sam Edwards
2026-01-08 20:08 ` Viacheslav Dubeyko
2026-01-07 21:01 ` [PATCH v2 3/6] ceph: Free page array when ceph_submit_write fails Sam Edwards
` (3 subsequent siblings)
5 siblings, 1 reply; 12+ messages in thread
From: Sam Edwards @ 2026-01-07 21:01 UTC (permalink / raw)
To: Xiubo Li, Ilya Dryomov
Cc: Viacheslav Dubeyko, Christian Brauner, Milind Changire,
Jeff Layton, ceph-devel, linux-kernel, Sam Edwards
Following the previous patch, ceph_process_folio_batch() no longer
returns errors because the writeback loop cannot handle them.
Since this function already indicates failure to lock any pages by
leaving `ceph_wbc.locked_pages == 0`, and the writeback loop has no way
to handle abandonment of a locked batch, change the return type of
ceph_process_folio_batch() to `void` and remove the pathological goto in
the writeback loop. The lack of a return code emphasizes that
ceph_process_folio_batch() is designed to be abort-free: that is, once
it commits a folio for writeback, it will not later abandon it or
propagate an error for that folio.
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---
fs/ceph/addr.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 3462df35d245..2b722916fb9b 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1283,16 +1283,16 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
}
static
-int ceph_process_folio_batch(struct address_space *mapping,
- struct writeback_control *wbc,
- struct ceph_writeback_ctl *ceph_wbc)
+void ceph_process_folio_batch(struct address_space *mapping,
+ struct writeback_control *wbc,
+ struct ceph_writeback_ctl *ceph_wbc)
{
struct inode *inode = mapping->host;
struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
struct ceph_client *cl = fsc->client;
struct folio *folio = NULL;
unsigned i;
- int rc = 0;
+ int rc;
for (i = 0; can_next_page_be_processed(ceph_wbc, i); i++) {
folio = ceph_wbc->fbatch.folios[i];
@@ -1322,12 +1322,10 @@ int ceph_process_folio_batch(struct address_space *mapping,
rc = ceph_check_page_before_write(mapping, wbc,
ceph_wbc, folio);
if (rc == -ENODATA) {
- rc = 0;
folio_unlock(folio);
ceph_wbc->fbatch.folios[i] = NULL;
continue;
} else if (rc == -E2BIG) {
- rc = 0;
folio_unlock(folio);
ceph_wbc->fbatch.folios[i] = NULL;
break;
@@ -1369,7 +1367,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
folio);
if (rc) {
- rc = 0;
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
break;
@@ -1380,8 +1377,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
}
ceph_wbc->processed_in_fbatch = i;
-
- return rc;
}
static inline
@@ -1685,10 +1680,8 @@ static int ceph_writepages_start(struct address_space *mapping,
break;
process_folio_batch:
- rc = ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
+ ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
- if (rc)
- goto release_folios;
/* did we get anything? */
if (!ceph_wbc.locked_pages)
--
2.51.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v2 3/6] ceph: Free page array when ceph_submit_write fails
2026-01-07 21:01 [PATCH v2 0/6] ceph: CephFS writeback correctness and performance fixes Sam Edwards
2026-01-07 21:01 ` [PATCH v2 1/6] ceph: Do not propagate page array emplacement errors as batch errors Sam Edwards
2026-01-07 21:01 ` [PATCH v2 2/6] ceph: Remove error return from ceph_process_folio_batch() Sam Edwards
@ 2026-01-07 21:01 ` Sam Edwards
2026-01-07 21:01 ` [PATCH v2 4/6] ceph: Split out page-array discarding to a function Sam Edwards
` (2 subsequent siblings)
5 siblings, 0 replies; 12+ messages in thread
From: Sam Edwards @ 2026-01-07 21:01 UTC (permalink / raw)
To: Xiubo Li, Ilya Dryomov
Cc: Viacheslav Dubeyko, Christian Brauner, Milind Changire,
Jeff Layton, ceph-devel, linux-kernel, Sam Edwards, stable
If `locked_pages` is zero, the page array must not be allocated:
ceph_process_folio_batch() uses `locked_pages` to decide when to
allocate `pages`, and redundant allocations trigger
ceph_allocate_page_array()'s BUG_ON(), resulting in a worker oops (and
writeback stall) or even a kernel panic. Consequently, the main loop in
ceph_writepages_start() assumes that the lifetime of `pages` is confined
to a single iteration.
The ceph_submit_write() function claims ownership of the page array on
success. On failure, however, it only redirties/unlocks the pages and never
frees the array, making the failure case in ceph_submit_write() fatal.
Free the page array (and reset locked_pages) in ceph_submit_write()'s
error-handling 'if' block so that the caller's invariant (that the array
does not outlive the iteration) is maintained unconditionally, making
failures in ceph_submit_write() recoverable as originally intended.
Fixes: 1551ec61dc55 ("ceph: introduce ceph_submit_write() method")
Cc: stable@vger.kernel.org
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---
fs/ceph/addr.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 2b722916fb9b..467aa7242b49 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1466,6 +1466,14 @@ int ceph_submit_write(struct address_space *mapping,
unlock_page(page);
}
+ if (ceph_wbc->from_pool) {
+ mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
+ ceph_wbc->from_pool = false;
+ } else
+ kfree(ceph_wbc->pages);
+ ceph_wbc->pages = NULL;
+ ceph_wbc->locked_pages = 0;
+
ceph_osdc_put_request(req);
return -EIO;
}
--
2.51.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v2 4/6] ceph: Split out page-array discarding to a function
2026-01-07 21:01 [PATCH v2 0/6] ceph: CephFS writeback correctness and performance fixes Sam Edwards
` (2 preceding siblings ...)
2026-01-07 21:01 ` [PATCH v2 3/6] ceph: Free page array when ceph_submit_write fails Sam Edwards
@ 2026-01-07 21:01 ` Sam Edwards
2026-01-08 20:11 ` Viacheslav Dubeyko
2026-01-07 21:01 ` [PATCH v2 5/6] ceph: Assert writeback loop invariants Sam Edwards
2026-01-07 21:01 ` [PATCH v2 6/6] ceph: Fix write storm on fscrypted files Sam Edwards
5 siblings, 1 reply; 12+ messages in thread
From: Sam Edwards @ 2026-01-07 21:01 UTC (permalink / raw)
To: Xiubo Li, Ilya Dryomov
Cc: Viacheslav Dubeyko, Christian Brauner, Milind Changire,
Jeff Layton, ceph-devel, linux-kernel, Sam Edwards
Discarding a page array (i.e. after failure to submit it) is a little
complex:
- Every folio in the batch needs to be redirtied and unlocked.
- Some folios are bounce pages created for fscrypt; the underlying
plaintext folios also need to be redirtied and unlocked.
- The array itself can come either from the mempool or regular kmalloc(),
so different free functions need to be used depending on which.
Although currently only ceph_submit_write() does this, this logic is
complex enough to warrant its own function. Move it to a new
ceph_discard_page_array() function that is called by ceph_submit_write()
instead.
Suggested-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---
fs/ceph/addr.c | 67 ++++++++++++++++++++++++++++----------------------
1 file changed, 38 insertions(+), 29 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 467aa7242b49..3becb13a09fe 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1222,6 +1222,43 @@ void ceph_allocate_page_array(struct address_space *mapping,
ceph_wbc->len = 0;
}
+static inline
+void ceph_discard_page_array(struct writeback_control *wbc,
+ struct ceph_writeback_ctl *ceph_wbc)
+{
+ int i;
+ struct page *page;
+
+ for (i = 0; i < folio_batch_count(&ceph_wbc->fbatch); i++) {
+ struct folio *folio = ceph_wbc->fbatch.folios[i];
+
+ if (!folio)
+ continue;
+
+ page = &folio->page;
+ redirty_page_for_writepage(wbc, page);
+ unlock_page(page);
+ }
+
+ for (i = 0; i < ceph_wbc->locked_pages; i++) {
+ page = ceph_fscrypt_pagecache_page(ceph_wbc->pages[i]);
+
+ if (!page)
+ continue;
+
+ redirty_page_for_writepage(wbc, page);
+ unlock_page(page);
+ }
+
+ if (ceph_wbc->from_pool) {
+ mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
+ ceph_wbc->from_pool = false;
+ } else
+ kfree(ceph_wbc->pages);
+ ceph_wbc->pages = NULL;
+ ceph_wbc->locked_pages = 0;
+}
+
static inline
bool is_folio_index_contiguous(const struct ceph_writeback_ctl *ceph_wbc,
const struct folio *folio)
@@ -1445,35 +1482,7 @@ int ceph_submit_write(struct address_space *mapping,
BUG_ON(len < ceph_fscrypt_page_offset(page) + thp_size(page) - offset);
if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) {
- for (i = 0; i < folio_batch_count(&ceph_wbc->fbatch); i++) {
- struct folio *folio = ceph_wbc->fbatch.folios[i];
-
- if (!folio)
- continue;
-
- page = &folio->page;
- redirty_page_for_writepage(wbc, page);
- unlock_page(page);
- }
-
- for (i = 0; i < ceph_wbc->locked_pages; i++) {
- page = ceph_fscrypt_pagecache_page(ceph_wbc->pages[i]);
-
- if (!page)
- continue;
-
- redirty_page_for_writepage(wbc, page);
- unlock_page(page);
- }
-
- if (ceph_wbc->from_pool) {
- mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
- ceph_wbc->from_pool = false;
- } else
- kfree(ceph_wbc->pages);
- ceph_wbc->pages = NULL;
- ceph_wbc->locked_pages = 0;
-
+ ceph_discard_page_array(wbc, ceph_wbc);
ceph_osdc_put_request(req);
return -EIO;
}
--
2.51.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v2 5/6] ceph: Assert writeback loop invariants
2026-01-07 21:01 [PATCH v2 0/6] ceph: CephFS writeback correctness and performance fixes Sam Edwards
` (3 preceding siblings ...)
2026-01-07 21:01 ` [PATCH v2 4/6] ceph: Split out page-array discarding to a function Sam Edwards
@ 2026-01-07 21:01 ` Sam Edwards
2026-01-08 20:12 ` Viacheslav Dubeyko
2026-01-07 21:01 ` [PATCH v2 6/6] ceph: Fix write storm on fscrypted files Sam Edwards
5 siblings, 1 reply; 12+ messages in thread
From: Sam Edwards @ 2026-01-07 21:01 UTC (permalink / raw)
To: Xiubo Li, Ilya Dryomov
Cc: Viacheslav Dubeyko, Christian Brauner, Milind Changire,
Jeff Layton, ceph-devel, linux-kernel, Sam Edwards
If `locked_pages` is zero, the page array must not be allocated:
ceph_process_folio_batch() uses `locked_pages` to decide when to
allocate `pages`, and redundant allocations trigger
ceph_allocate_page_array()'s BUG_ON(), resulting in a worker oops (and
writeback stall) or even a kernel panic. Consequently, the main loop in
ceph_writepages_start() assumes that the lifetime of `pages` is confined
to a single iteration.
This expectation is currently not clear enough, as evidenced by two
recent patches which fix oopses caused by `pages` persisting into
the next loop iteration:
- "ceph: Do not propagate page array emplacement errors as batch errors"
- "ceph: Free page array when ceph_submit_write fails"
Use an explicit BUG_ON() at the top of the loop to assert the loop's
preexisting expectation that `pages` is cleaned up by the previous
iteration. Because this is closely tied to `locked_pages`, also make it
the previous iteration's responsibility to guarantee its reset, and
verify with a second new BUG_ON() instead of handling (and masking)
failures to do so.
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---
fs/ceph/addr.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 3becb13a09fe..f2db05b51a3b 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1679,7 +1679,9 @@ static int ceph_writepages_start(struct address_space *mapping,
tag_pages_for_writeback(mapping, ceph_wbc.index, ceph_wbc.end);
while (!has_writeback_done(&ceph_wbc)) {
- ceph_wbc.locked_pages = 0;
+ BUG_ON(ceph_wbc.locked_pages);
+ BUG_ON(ceph_wbc.pages);
+
ceph_wbc.max_pages = ceph_wbc.wsize >> PAGE_SHIFT;
get_more_pages:
--
2.51.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v2 6/6] ceph: Fix write storm on fscrypted files
2026-01-07 21:01 [PATCH v2 0/6] ceph: CephFS writeback correctness and performance fixes Sam Edwards
` (4 preceding siblings ...)
2026-01-07 21:01 ` [PATCH v2 5/6] ceph: Assert writeback loop invariants Sam Edwards
@ 2026-01-07 21:01 ` Sam Edwards
5 siblings, 0 replies; 12+ messages in thread
From: Sam Edwards @ 2026-01-07 21:01 UTC (permalink / raw)
To: Xiubo Li, Ilya Dryomov
Cc: Viacheslav Dubeyko, Christian Brauner, Milind Changire,
Jeff Layton, ceph-devel, linux-kernel, Sam Edwards, stable
CephFS stores file data across multiple RADOS objects. An object is the
atomic unit of storage, so the writeback code must clean only folios
that belong to the same object with each OSD request.
CephFS also supports RAID0-style striping of file contents: if enabled,
each object stores multiple unbroken "stripe units" covering different
portions of the file; if disabled, a "stripe unit" is simply the whole
object. The stripe unit is (usually) reported as the inode's block size.
Though the writeback logic could, in principle, lock all dirty folios
belonging to the same object, its current design is to lock only a
single stripe unit at a time. Ever since this code was first written,
it has determined this size by checking the inode's block size.
However, the relatively-new fscrypt support needed to reduce the block
size for encrypted inodes to the crypto block size (see 'fixes' commit),
which causes an unnecessarily high number of write operations (~1024x as
many, with 4MiB objects) and correspondingly degraded performance.
Fix this (and clarify intent) by using i_layout.stripe_unit directly in
ceph_define_write_size() so that encrypted inodes are written back with
the same number of operations as if they were unencrypted.
Fixes: 94af0470924c ("ceph: add some fscrypt guardrails")
Cc: stable@vger.kernel.org
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---
fs/ceph/addr.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index f2db05b51a3b..b97a6120d4b9 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1000,7 +1000,8 @@ unsigned int ceph_define_write_size(struct address_space *mapping)
{
struct inode *inode = mapping->host;
struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
- unsigned int wsize = i_blocksize(inode);
+ struct ceph_inode_info *ci = ceph_inode(inode);
+ unsigned int wsize = ci->i_layout.stripe_unit;
if (fsc->mount_options->wsize < wsize)
wsize = fsc->mount_options->wsize;
--
2.51.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH v2 1/6] ceph: Do not propagate page array emplacement errors as batch errors
2026-01-07 21:01 ` [PATCH v2 1/6] ceph: Do not propagate page array emplacement errors as batch errors Sam Edwards
@ 2026-01-08 20:05 ` Viacheslav Dubeyko
0 siblings, 0 replies; 12+ messages in thread
From: Viacheslav Dubeyko @ 2026-01-08 20:05 UTC (permalink / raw)
To: Xiubo Li, idryomov@gmail.com, cfsworks@gmail.com
Cc: Milind Changire, stable@vger.kernel.org,
ceph-devel@vger.kernel.org, brauner@kernel.org,
jlayton@kernel.org, linux-kernel@vger.kernel.org
On Wed, 2026-01-07 at 13:01 -0800, Sam Edwards wrote:
> When fscrypt is enabled, move_dirty_folio_in_page_array() may fail
> because it needs to allocate bounce buffers to store the encrypted
> versions of each folio. Each folio beyond the first allocates its bounce
> buffer with GFP_NOWAIT. Failures are common (and expected) under this
> allocation mode; they should flush (not abort) the batch.
>
> However, ceph_process_folio_batch() uses the same `rc` variable for its
> own return code and for capturing the return codes of its routine calls;
> failing to reset `rc` back to 0 results in the error being propagated
> out to the main writeback loop, which cannot actually tolerate any
> errors here: once `ceph_wbc.pages` is allocated, it must be passed to
> ceph_submit_write() to be freed. If it survives until the next iteration
> (e.g. due to the goto being followed), ceph_allocate_page_array()'s
> BUG_ON() will oops the worker. (Subsequent patches in this series make
> the loop more robust.)
>
> Note that this failure mode is currently masked due to another bug
> (addressed later in this series) that prevents multiple encrypted folios
> from being selected for the same write.
>
> For now, just reset `rc` when redirtying the folio to prevent errors in
> move_dirty_folio_in_page_array() from propagating. (Note that
> move_dirty_folio_in_page_array() is careful never to return errors on
> the first folio, so there is no need to check for that.) After this
> change, ceph_process_folio_batch() no longer returns errors; its only
> remaining failure indicator is `locked_pages == 0`, which the caller
> already handles correctly. The next patch in this series removes the
> now-unused error return entirely.
>
> Fixes: ce80b76dd327 ("ceph: introduce ceph_process_folio_batch() method")
> Cc: stable@vger.kernel.org
> Signed-off-by: Sam Edwards <CFSworks@gmail.com>
> ---
> fs/ceph/addr.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 63b75d214210..3462df35d245 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1369,6 +1369,7 @@ int ceph_process_folio_batch(struct address_space *mapping,
> rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
> folio);
> if (rc) {
> + rc = 0;
> folio_redirty_for_writepage(wbc, folio);
> folio_unlock(folio);
> break;
I've already shared my opinion about this patch: it should be the last one,
because another patch fixes the issue that hides this one. It makes sense to
uncover this bug first and then fix it. My opinion still stands.
Thanks,
Slava.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v2 2/6] ceph: Remove error return from ceph_process_folio_batch()
2026-01-07 21:01 ` [PATCH v2 2/6] ceph: Remove error return from ceph_process_folio_batch() Sam Edwards
@ 2026-01-08 20:08 ` Viacheslav Dubeyko
2026-01-09 0:29 ` Sam Edwards
0 siblings, 1 reply; 12+ messages in thread
From: Viacheslav Dubeyko @ 2026-01-08 20:08 UTC (permalink / raw)
To: Xiubo Li, idryomov@gmail.com, cfsworks@gmail.com
Cc: Milind Changire, ceph-devel@vger.kernel.org, brauner@kernel.org,
jlayton@kernel.org, linux-kernel@vger.kernel.org
On Wed, 2026-01-07 at 13:01 -0800, Sam Edwards wrote:
> Following the previous patch, ceph_process_folio_batch() no longer
> returns errors because the writeback loop cannot handle them.
>
> Since this function already indicates failure to lock any pages by
> leaving `ceph_wbc.locked_pages == 0`, and the writeback loop has no way
> to handle abandonment of a locked batch, change the return type of
> ceph_process_folio_batch() to `void` and remove the pathological goto in
> the writeback loop. The lack of a return code emphasizes that
> ceph_process_folio_batch() is designed to be abort-free: that is, once
> it commits a folio for writeback, it will not later abandon it or
> propagate an error for that folio.
>
> Signed-off-by: Sam Edwards <CFSworks@gmail.com>
> ---
> fs/ceph/addr.c | 17 +++++------------
> 1 file changed, 5 insertions(+), 12 deletions(-)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 3462df35d245..2b722916fb9b 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1283,16 +1283,16 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
> }
>
> static
> -int ceph_process_folio_batch(struct address_space *mapping,
> - struct writeback_control *wbc,
> - struct ceph_writeback_ctl *ceph_wbc)
> +void ceph_process_folio_batch(struct address_space *mapping,
> + struct writeback_control *wbc,
> + struct ceph_writeback_ctl *ceph_wbc)
> {
> struct inode *inode = mapping->host;
> struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
> struct ceph_client *cl = fsc->client;
> struct folio *folio = NULL;
> unsigned i;
> - int rc = 0;
> + int rc;
>
> for (i = 0; can_next_page_be_processed(ceph_wbc, i); i++) {
> folio = ceph_wbc->fbatch.folios[i];
> @@ -1322,12 +1322,10 @@ int ceph_process_folio_batch(struct address_space *mapping,
> rc = ceph_check_page_before_write(mapping, wbc,
> ceph_wbc, folio);
> if (rc == -ENODATA) {
> - rc = 0;
> folio_unlock(folio);
> ceph_wbc->fbatch.folios[i] = NULL;
> continue;
> } else if (rc == -E2BIG) {
> - rc = 0;
> folio_unlock(folio);
> ceph_wbc->fbatch.folios[i] = NULL;
> break;
> @@ -1369,7 +1367,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
> rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
> folio);
> if (rc) {
> - rc = 0;
> folio_redirty_for_writepage(wbc, folio);
> folio_unlock(folio);
> break;
> @@ -1380,8 +1377,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
> }
>
> ceph_wbc->processed_in_fbatch = i;
> -
> - return rc;
> }
>
> static inline
> @@ -1685,10 +1680,8 @@ static int ceph_writepages_start(struct address_space *mapping,
> break;
>
> process_folio_batch:
> - rc = ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
> + ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
> ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
> - if (rc)
> - goto release_folios;
>
> /* did we get anything? */
> if (!ceph_wbc.locked_pages)
I don't see the point of removing the error return code from this function. I
prefer to keep it, because this method calls others with complex
functionality, and I would still like to preserve the path for returning an
error code from the function. So, I don't agree with this patch.
Thanks,
Slava.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v2 4/6] ceph: Split out page-array discarding to a function
2026-01-07 21:01 ` [PATCH v2 4/6] ceph: Split out page-array discarding to a function Sam Edwards
@ 2026-01-08 20:11 ` Viacheslav Dubeyko
0 siblings, 0 replies; 12+ messages in thread
From: Viacheslav Dubeyko @ 2026-01-08 20:11 UTC (permalink / raw)
To: Xiubo Li, idryomov@gmail.com, cfsworks@gmail.com
Cc: Milind Changire, ceph-devel@vger.kernel.org, brauner@kernel.org,
jlayton@kernel.org, linux-kernel@vger.kernel.org
On Wed, 2026-01-07 at 13:01 -0800, Sam Edwards wrote:
> Discarding a page array (i.e. after failure to submit it) is a little
> complex:
> - Every folio in the batch needs to be redirtied and unlocked.
> - Some folios are bounce pages created for fscrypt; the underlying
> plaintext folios also need to be redirtied and unlocked.
> - The array itself can come either from the mempool or regular kmalloc(),
> so different free functions need to be used depending on which.
>
> Although currently only ceph_submit_write() does this, this logic is
> complex enough to warrant its own function. Move it to a new
> ceph_discard_page_array() function that is called by ceph_submit_write()
> instead.
>
> Suggested-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
> Signed-off-by: Sam Edwards <CFSworks@gmail.com>
> ---
> fs/ceph/addr.c | 67 ++++++++++++++++++++++++++++----------------------
> 1 file changed, 38 insertions(+), 29 deletions(-)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 467aa7242b49..3becb13a09fe 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1222,6 +1222,43 @@ void ceph_allocate_page_array(struct address_space *mapping,
> 	ceph_wbc->len = 0;
> }
>
> +static inline
> +void ceph_discard_page_array(struct writeback_control *wbc,
> +			     struct ceph_writeback_ctl *ceph_wbc)
> +{
> +	int i;
> +	struct page *page;
> +
> +	for (i = 0; i < folio_batch_count(&ceph_wbc->fbatch); i++) {
> +		struct folio *folio = ceph_wbc->fbatch.folios[i];
> +
> +		if (!folio)
> +			continue;
> +
> +		page = &folio->page;
> +		redirty_page_for_writepage(wbc, page);
> +		unlock_page(page);
> +	}
> +
> +	for (i = 0; i < ceph_wbc->locked_pages; i++) {
> +		page = ceph_fscrypt_pagecache_page(ceph_wbc->pages[i]);
> +
> +		if (!page)
> +			continue;
> +
> +		redirty_page_for_writepage(wbc, page);
> +		unlock_page(page);
> +	}
> +
> +	if (ceph_wbc->from_pool) {
> +		mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
> +		ceph_wbc->from_pool = false;
> +	} else
> +		kfree(ceph_wbc->pages);
> +	ceph_wbc->pages = NULL;
> +	ceph_wbc->locked_pages = 0;
> +}
> +
> static inline
> bool is_folio_index_contiguous(const struct ceph_writeback_ctl *ceph_wbc,
> 			       const struct folio *folio)
> @@ -1445,35 +1482,7 @@ int ceph_submit_write(struct address_space *mapping,
> 	BUG_ON(len < ceph_fscrypt_page_offset(page) + thp_size(page) - offset);
>
> 	if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) {
> -		for (i = 0; i < folio_batch_count(&ceph_wbc->fbatch); i++) {
> -			struct folio *folio = ceph_wbc->fbatch.folios[i];
> -
> -			if (!folio)
> -				continue;
> -
> -			page = &folio->page;
> -			redirty_page_for_writepage(wbc, page);
> -			unlock_page(page);
> -		}
> -
> -		for (i = 0; i < ceph_wbc->locked_pages; i++) {
> -			page = ceph_fscrypt_pagecache_page(ceph_wbc->pages[i]);
> -
> -			if (!page)
> -				continue;
> -
> -			redirty_page_for_writepage(wbc, page);
> -			unlock_page(page);
> -		}
> -
> -		if (ceph_wbc->from_pool) {
> -			mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
> -			ceph_wbc->from_pool = false;
> -		} else
> -			kfree(ceph_wbc->pages);
> -		ceph_wbc->pages = NULL;
> -		ceph_wbc->locked_pages = 0;
> -
> +		ceph_discard_page_array(wbc, ceph_wbc);
> 		ceph_osdc_put_request(req);
> 		return -EIO;
> 	}
This patch makes sense to me. Looks good.
Reviewed-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Thanks,
Slava.
* Re: [PATCH v2 5/6] ceph: Assert writeback loop invariants
2026-01-07 21:01 ` [PATCH v2 5/6] ceph: Assert writeback loop invariants Sam Edwards
@ 2026-01-08 20:12 ` Viacheslav Dubeyko
0 siblings, 0 replies; 12+ messages in thread
From: Viacheslav Dubeyko @ 2026-01-08 20:12 UTC (permalink / raw)
To: Xiubo Li, idryomov@gmail.com, cfsworks@gmail.com
Cc: Milind Changire, ceph-devel@vger.kernel.org, brauner@kernel.org,
jlayton@kernel.org, linux-kernel@vger.kernel.org
On Wed, 2026-01-07 at 13:01 -0800, Sam Edwards wrote:
> If `locked_pages` is zero, the page array must not be allocated:
> ceph_process_folio_batch() uses `locked_pages` to decide when to
> allocate `pages`, and redundant allocations trigger
> ceph_allocate_page_array()'s BUG_ON(), resulting in a worker oops (and
> writeback stall) or even a kernel panic. Consequently, the main loop in
> ceph_writepages_start() assumes that the lifetime of `pages` is confined
> to a single iteration.
>
> This expectation is currently not clear enough, as evidenced by two
> recent patches which fix oopses caused by `pages` persisting into
> the next loop iteration:
> - "ceph: Do not propagate page array emplacement errors as batch errors"
> - "ceph: Free page array when ceph_submit_write fails"
>
> Use an explicit BUG_ON() at the top of the loop to assert the loop's
> preexisting expectation that `pages` is cleaned up by the previous
> iteration. Because this is closely tied to `locked_pages`, also make it
> the previous iteration's responsibility to guarantee its reset, and
> verify with a second new BUG_ON() instead of handling (and masking)
> failures to do so.
>
> Signed-off-by: Sam Edwards <CFSworks@gmail.com>
> ---
> fs/ceph/addr.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 3becb13a09fe..f2db05b51a3b 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1679,7 +1679,9 @@ static int ceph_writepages_start(struct address_space *mapping,
> 	tag_pages_for_writeback(mapping, ceph_wbc.index, ceph_wbc.end);
>
> 	while (!has_writeback_done(&ceph_wbc)) {
> -		ceph_wbc.locked_pages = 0;
> +		BUG_ON(ceph_wbc.locked_pages);
> +		BUG_ON(ceph_wbc.pages);
> +
> 		ceph_wbc.max_pages = ceph_wbc.wsize >> PAGE_SHIFT;
>
> get_more_pages:
I don't agree with using BUG_ON() here.
Thanks,
Slava.
* Re: [PATCH v2 2/6] ceph: Remove error return from ceph_process_folio_batch()
2026-01-08 20:08 ` Viacheslav Dubeyko
@ 2026-01-09 0:29 ` Sam Edwards
0 siblings, 0 replies; 12+ messages in thread
From: Sam Edwards @ 2026-01-09 0:29 UTC (permalink / raw)
To: Viacheslav Dubeyko
Cc: Xiubo Li, idryomov@gmail.com, Milind Changire,
ceph-devel@vger.kernel.org, brauner@kernel.org,
jlayton@kernel.org, linux-kernel@vger.kernel.org
On Thu, Jan 8, 2026 at 12:08 PM Viacheslav Dubeyko
<Slava.Dubeyko@ibm.com> wrote:
>
> On Wed, 2026-01-07 at 13:01 -0800, Sam Edwards wrote:
> > Following the previous patch, ceph_process_folio_batch() no longer
> > returns errors because the writeback loop cannot handle them.
> >
> > Since this function already indicates failure to lock any pages by
> > leaving `ceph_wbc.locked_pages == 0`, and the writeback loop has no way
> > to handle abandonment of a locked batch, change the return type of
> > ceph_process_folio_batch() to `void` and remove the pathological goto in
> > the writeback loop. The lack of a return code emphasizes that
> > ceph_process_folio_batch() is designed to be abort-free: that is, once
> > it commits a folio for writeback, it will not later abandon it or
> > propagate an error for that folio.
> >
> > Signed-off-by: Sam Edwards <CFSworks@gmail.com>
> > ---
> > fs/ceph/addr.c | 17 +++++------------
> > 1 file changed, 5 insertions(+), 12 deletions(-)
> >
> > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > index 3462df35d245..2b722916fb9b 100644
> > --- a/fs/ceph/addr.c
> > +++ b/fs/ceph/addr.c
> > @@ -1283,16 +1283,16 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
> > }
> >
> > static
> > -int ceph_process_folio_batch(struct address_space *mapping,
> > -			     struct writeback_control *wbc,
> > -			     struct ceph_writeback_ctl *ceph_wbc)
> > +void ceph_process_folio_batch(struct address_space *mapping,
> > +			      struct writeback_control *wbc,
> > +			      struct ceph_writeback_ctl *ceph_wbc)
> > {
> > 	struct inode *inode = mapping->host;
> > 	struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
> > 	struct ceph_client *cl = fsc->client;
> > 	struct folio *folio = NULL;
> > 	unsigned i;
> > -	int rc = 0;
> > +	int rc;
> >
> > 	for (i = 0; can_next_page_be_processed(ceph_wbc, i); i++) {
> > 		folio = ceph_wbc->fbatch.folios[i];
> > @@ -1322,12 +1322,10 @@ int ceph_process_folio_batch(struct address_space *mapping,
> > 		rc = ceph_check_page_before_write(mapping, wbc,
> > 						  ceph_wbc, folio);
> > 		if (rc == -ENODATA) {
> > -			rc = 0;
> > 			folio_unlock(folio);
> > 			ceph_wbc->fbatch.folios[i] = NULL;
> > 			continue;
> > 		} else if (rc == -E2BIG) {
> > -			rc = 0;
> > 			folio_unlock(folio);
> > 			ceph_wbc->fbatch.folios[i] = NULL;
> > 			break;
> > @@ -1369,7 +1367,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
> > 		rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
> > 						    folio);
> > 		if (rc) {
> > -			rc = 0;
> > 			folio_redirty_for_writepage(wbc, folio);
> > 			folio_unlock(folio);
> > 			break;
> > @@ -1380,8 +1377,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
> > 	}
> >
> > 	ceph_wbc->processed_in_fbatch = i;
> > -
> > -	return rc;
> > }
> >
> > static inline
> > @@ -1685,10 +1680,8 @@ static int ceph_writepages_start(struct address_space *mapping,
> > 			break;
> >
> > process_folio_batch:
> > -		rc = ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
> > +		ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
> > 		ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
> > -		if (rc)
> > -			goto release_folios;
> >
> > 		/* did we get anything? */
> > 		if (!ceph_wbc.locked_pages)
>
> I don't see the point of removing the error return code from this function. I
> prefer to keep it, because this method calls other ones with complex
> functionality.
Hey Slava,
I've tried to clarify this patch a few ways now, but I still haven't
seen actionable technical feedback, only misunderstanding or
preference rather than concrete issues. I'm making one last, detailed
attempt to explain the rationale; if it still isn't clear, I believe
our only remaining option is to involve another maintainer to help
complete the review.
As the function ceph_process_folio_batch() exists today (i.e. in
Linus's tree), the API allows for 3 different possible outcomes (names
my own, for illustrative purposes):
- ERROR (rc != 0): The function returns an error code, maybe after
already allocating the page array and locking some pages, maybe not.
It's ambiguous.
- EMPTY (!rc && !locked_pages): The function executes normally, but
does not lock any folios. It may do this if it encounters a problem on
the first folio.
- LOCKED (!rc && locked_pages >= 1): The function executes to
completion without erroring after locking at least one folio. It may
have encountered errors on subsequent folios, but it handles these by
flushing the batch and trying again next time it's called.
The ceph_writepages_start() loop that calls it, today, does not
differentiate between the ERROR and EMPTY result:
	if (rc)
		goto release_folios;

	/* did we get anything? */
	if (!ceph_wbc.locked_pages)
		goto release_folios;
So right now it's just a redundant degree of freedom. Standard
software engineering principles dictate that a function's API should
not create complexity beyond the actual needs of its caller. So either
we remove EMPTY... or remove ERROR.
The current problem with ERROR is that it (often, not always) results
in a crash: the loop will come around without freeing the page array
and violate the expectations of the next iteration, and we violate the
BUG_ON() in ceph_allocate_page_array(). The loop is, today, not
written in a way that "aborting" a batch is ever appropriate; it
assumes the page array only exists when LOCKED. We must either add
logic to the caller to fix it... or remove ERROR.
The only way that the ERROR result could occur today is if fscrypt is
enabled, the write storm bug is fixed, and a non-first folio fails its
bounce page allocation. That means that it isn't possible now, and it
wouldn't be made possible by applying this patchset. The "no dead
code" rule requires that we either add some logic to
ceph_allocate_page_array() that can actually result in ERROR... or
remove ERROR.
Finally, there is a big advantage to removing the return code from
ceph_process_folio_batch(): it uses C's type system to enforce the
guarantee that it doesn't abandon the page array. (Note! That does not
mean that ceph_process_folio_batch() is not allowed to encounter
failures -- I fully agree that it's a complex function and it may have
more fallible logic added in the future -- just that it is responsible
for cleaning up after those failures. Put simply, the lack of a return
code gives it no way to make the cleanup the caller's problem!)
> And I would still like to preserve the path for returning an error code from
> the function. So, I don't agree with this patch.
It can always be brought back in the future if something genuinely
needs it (such as a hypothetical "major failure" in
ceph_process_folio_batch() that needs to fail the entire writeback,
not merely retry). But Linux doesn't keep "vestigial" code around for
hypothetical future needs. If there isn't a concrete reason to add or
use it now, then there's no reason to preserve it.
Hope this helps,
Sam
>
> Thanks,
> Slava.
Thread overview: 12+ messages
2026-01-07 21:01 [PATCH v2 0/6] ceph: CephFS writeback correctness and performance fixes Sam Edwards
2026-01-07 21:01 ` [PATCH v2 1/6] ceph: Do not propagate page array emplacement errors as batch errors Sam Edwards
2026-01-08 20:05 ` Viacheslav Dubeyko
2026-01-07 21:01 ` [PATCH v2 2/6] ceph: Remove error return from ceph_process_folio_batch() Sam Edwards
2026-01-08 20:08 ` Viacheslav Dubeyko
2026-01-09 0:29 ` Sam Edwards
2026-01-07 21:01 ` [PATCH v2 3/6] ceph: Free page array when ceph_submit_write fails Sam Edwards
2026-01-07 21:01 ` [PATCH v2 4/6] ceph: Split out page-array discarding to a function Sam Edwards
2026-01-08 20:11 ` Viacheslav Dubeyko
2026-01-07 21:01 ` [PATCH v2 5/6] ceph: Assert writeback loop invariants Sam Edwards
2026-01-08 20:12 ` Viacheslav Dubeyko
2026-01-07 21:01 ` [PATCH v2 6/6] ceph: Fix write storm on fscrypted files Sam Edwards