From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
ceph-devel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
David Howells <dhowells@redhat.com>
Subject: [PATCH v3 7/9] ceph: Remove uses of page from ceph_process_folio_batch()
Date: Mon, 17 Feb 2025 18:51:15 +0000
Message-ID: <20250217185119.430193-8-willy@infradead.org>
In-Reply-To: <20250217185119.430193-1-willy@infradead.org>
Remove uses of page->index and the deprecated page APIs, which saves a lot
of hidden calls to compound_head().
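For context, each of those page wrappers resolves the folio from the page
before doing the real work; a sketch of lock_page() as defined in recent
kernels (paraphrasing include/linux/pagemap.h) looks roughly like this:

	static inline void lock_page(struct page *page)
	{
		struct folio *folio;

		might_sleep();
		folio = page_folio(page);	/* hidden compound_head() lookup */
		if (!folio_trylock(folio))
			__folio_lock(folio);
	}

Working with the folio directly performs that lookup once per batch entry
instead of once per wrapper call.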
Also convert is_page_index_contiguous() to is_folio_index_contiguous()
and make its arguments const.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ceph/addr.c | 47 ++++++++++++++++++++++-------------------------
1 file changed, 22 insertions(+), 25 deletions(-)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 90d154bc4808..fd46eab12ded 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1226,10 +1226,10 @@ void ceph_allocate_page_array(struct address_space *mapping,
 }
 
 static inline
-bool is_page_index_contiguous(struct ceph_writeback_ctl *ceph_wbc,
-			      struct page *page)
+bool is_folio_index_contiguous(const struct ceph_writeback_ctl *ceph_wbc,
+			       const struct folio *folio)
 {
-	return page->index == (ceph_wbc->offset + ceph_wbc->len) >> PAGE_SHIFT;
+	return folio->index == (ceph_wbc->offset + ceph_wbc->len) >> PAGE_SHIFT;
 }
 
 static inline
@@ -1294,7 +1294,6 @@ int ceph_process_folio_batch(struct address_space *mapping,
 	struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
 	struct ceph_client *cl = fsc->client;
 	struct folio *folio = NULL;
-	struct page *page = NULL;
 	unsigned i;
 	int rc = 0;
 
@@ -1304,11 +1303,9 @@ int ceph_process_folio_batch(struct address_space *mapping,
 		if (!folio)
 			continue;
 
-		page = &folio->page;
-
 		doutc(cl, "? %p idx %lu, folio_test_writeback %#x, "
 			"folio_test_dirty %#x, folio_test_locked %#x\n",
-			page, page->index, folio_test_writeback(folio),
+			folio, folio->index, folio_test_writeback(folio),
 			folio_test_dirty(folio),
 			folio_test_locked(folio));
 
@@ -1321,27 +1318,27 @@ int ceph_process_folio_batch(struct address_space *mapping,
 		}
 
 		if (ceph_wbc->locked_pages == 0)
-			lock_page(page); /* first page */
-		else if (!trylock_page(page))
+			folio_lock(folio);
+		else if (!folio_trylock(folio))
 			break;
 
 		rc = ceph_check_page_before_write(mapping, wbc,
 						  ceph_wbc, folio);
 		if (rc == -ENODATA) {
 			rc = 0;
-			unlock_page(page);
+			folio_unlock(folio);
 			ceph_wbc->fbatch.folios[i] = NULL;
 			continue;
 		} else if (rc == -E2BIG) {
 			rc = 0;
-			unlock_page(page);
+			folio_unlock(folio);
 			ceph_wbc->fbatch.folios[i] = NULL;
 			break;
 		}
 
-		if (!clear_page_dirty_for_io(page)) {
-			doutc(cl, "%p !clear_page_dirty_for_io\n", page);
-			unlock_page(page);
+		if (!folio_clear_dirty_for_io(folio)) {
+			doutc(cl, "%p !folio_clear_dirty_for_io\n", folio);
+			folio_unlock(folio);
 			ceph_wbc->fbatch.folios[i] = NULL;
 			continue;
 		}
@@ -1353,35 +1350,35 @@ int ceph_process_folio_batch(struct address_space *mapping,
 		 * allocate a page array
 		 */
 		if (ceph_wbc->locked_pages == 0) {
-			ceph_allocate_page_array(mapping, ceph_wbc, page);
-		} else if (!is_page_index_contiguous(ceph_wbc, page)) {
+			ceph_allocate_page_array(mapping, ceph_wbc, &folio->page);
+		} else if (!is_folio_index_contiguous(ceph_wbc, folio)) {
 			if (is_num_ops_too_big(ceph_wbc)) {
-				redirty_page_for_writepage(wbc, page);
-				unlock_page(page);
+				folio_redirty_for_writepage(wbc, folio);
+				folio_unlock(folio);
 				break;
 			}
 
 			ceph_wbc->num_ops++;
-			ceph_wbc->offset = (u64)page_offset(page);
+			ceph_wbc->offset = (u64)folio_pos(folio);
 			ceph_wbc->len = 0;
 		}
 
 		/* note position of first page in fbatch */
-		doutc(cl, "%llx.%llx will write page %p idx %lu\n",
-		      ceph_vinop(inode), page, page->index);
+		doutc(cl, "%llx.%llx will write folio %p idx %lu\n",
+		      ceph_vinop(inode), folio, folio->index);
 
 		fsc->write_congested = is_write_congestion_happened(fsc);
 
 		rc = ceph_move_dirty_page_in_page_array(mapping, wbc,
-							ceph_wbc, page);
+							ceph_wbc, &folio->page);
 		if (rc) {
-			redirty_page_for_writepage(wbc, page);
-			unlock_page(page);
+			folio_redirty_for_writepage(wbc, folio);
+			folio_unlock(folio);
 			break;
 		}
 
 		ceph_wbc->fbatch.folios[i] = NULL;
-		ceph_wbc->len += thp_size(page);
+		ceph_wbc->len += folio_size(folio);
 	}
 
 	ceph_wbc->processed_in_fbatch = i;
--
2.47.2