From: Sam Edwards <cfsworks@gmail.com>
To: Xiubo Li <xiubli@redhat.com>, Ilya Dryomov <idryomov@gmail.com>
Cc: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>,
	Christian Brauner <brauner@kernel.org>,
	Milind Changire <mchangir@redhat.com>,
	Jeff Layton <jlayton@kernel.org>,
	ceph-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Sam Edwards <CFSworks@gmail.com>
Subject: [PATCH 2/5] ceph: Remove error return from ceph_process_folio_batch()
Date: Tue, 30 Dec 2025 18:43:13 -0800
Message-ID: <20251231024316.4643-3-CFSworks@gmail.com>
In-Reply-To: <20251231024316.4643-1-CFSworks@gmail.com>

After the previous patch, ceph_process_folio_batch() no longer returns
any errors, since the writeback loop cannot handle them anyway.

Since this function already indicates failure to lock any pages by
leaving `ceph_wbc.locked_pages == 0`, and the writeback loop has no way
to handle abandonment of a locked batch, change the return type of
ceph_process_folio_batch() to `void` and remove the pathological goto in
the writeback loop. The lack of a return code emphasizes that
ceph_process_folio_batch() is designed to be abort-free: that is, once
it commits a folio for writeback, it will not later abandon it or
propagate an error for that folio.
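
To illustrate the resulting contract (a simplified sketch only, not the
exact caller code), the writeback loop now keys off the batch state
instead of a return value:

	ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
	ceph_shift_unused_folios_left(&ceph_wbc.fbatch);

	/* did we get anything? */
	if (!ceph_wbc.locked_pages) {
		/* nothing was locked; nothing to submit for this batch */
	} else {
		/*
		 * Every folio counted in locked_pages stays committed for
		 * writeback: ceph_process_folio_batch() will not abandon
		 * it or report an error for it after this point.
		 */
	}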

Signed-off-by: Sam Edwards <CFSworks@gmail.com>
---
 fs/ceph/addr.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 3462df35d245..2b722916fb9b 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1283,16 +1283,16 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
 }
 
 static
-int ceph_process_folio_batch(struct address_space *mapping,
-			     struct writeback_control *wbc,
-			     struct ceph_writeback_ctl *ceph_wbc)
+void ceph_process_folio_batch(struct address_space *mapping,
+			      struct writeback_control *wbc,
+			      struct ceph_writeback_ctl *ceph_wbc)
 {
 	struct inode *inode = mapping->host;
 	struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
 	struct ceph_client *cl = fsc->client;
 	struct folio *folio = NULL;
 	unsigned i;
-	int rc = 0;
+	int rc;
 
 	for (i = 0; can_next_page_be_processed(ceph_wbc, i); i++) {
 		folio = ceph_wbc->fbatch.folios[i];
@@ -1322,12 +1322,10 @@ int ceph_process_folio_batch(struct address_space *mapping,
 		rc = ceph_check_page_before_write(mapping, wbc,
 						  ceph_wbc, folio);
 		if (rc == -ENODATA) {
-			rc = 0;
 			folio_unlock(folio);
 			ceph_wbc->fbatch.folios[i] = NULL;
 			continue;
 		} else if (rc == -E2BIG) {
-			rc = 0;
 			folio_unlock(folio);
 			ceph_wbc->fbatch.folios[i] = NULL;
 			break;
@@ -1369,7 +1367,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
 		rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
 				folio);
 		if (rc) {
-			rc = 0;
 			folio_redirty_for_writepage(wbc, folio);
 			folio_unlock(folio);
 			break;
@@ -1380,8 +1377,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
 	}
 
 	ceph_wbc->processed_in_fbatch = i;
-
-	return rc;
 }
 
 static inline
@@ -1685,10 +1680,8 @@ static int ceph_writepages_start(struct address_space *mapping,
 			break;
 
 process_folio_batch:
-		rc = ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
+		ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
 		ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
-		if (rc)
-			goto release_folios;
 
 		/* did we get anything? */
 		if (!ceph_wbc.locked_pages)
-- 
2.51.2

