* [PATCH 01/46] btrfs: convert btrfs_readahead to only use folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 02/46] btrfs: convert btrfs_read_folio to only use a folio Josef Bacik
` (46 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We're the only user of readahead_page_batch(). Convert btrfs_readahead
to use the folio based helpers to do readahead.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
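[Not part of this patch: a minimal sketch of the readahead_folio() iteration
pattern the conversion moves to; example_readahead_iter is a hypothetical
function, not btrfs code. readahead_folio() hands back each folio in the
window already locked and drops the reference on the caller's behalf, which
is why the pagepool array and the put_page() calls disappear.]

#include <linux/pagemap.h>
#include <linux/printk.h>

static void example_readahead_iter(struct readahead_control *rac)
{
        struct folio *folio;

        /*
         * Each folio comes back locked; readahead_folio() has already
         * dropped the reference it took, so the caller must not call
         * folio_put() here.  Unlocking happens once the read completes.
         */
        while ((folio = readahead_folio(rac)) != NULL)
                pr_info("readahead folio index=%lu size=%zu\n",
                        folio->index, folio_size(folio));
}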
fs/btrfs/extent_io.c | 36 ++++++++----------------------------
1 file changed, 8 insertions(+), 28 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index aa7f8148cd0d..a4d5f8d00f04 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1176,26 +1176,6 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
return ret;
}
-static inline void contiguous_readpages(struct page *pages[], int nr_pages,
- u64 start, u64 end,
- struct extent_map **em_cached,
- struct btrfs_bio_ctrl *bio_ctrl,
- u64 *prev_em_start)
-{
- struct btrfs_inode *inode = page_to_inode(pages[0]);
- int index;
-
- ASSERT(em_cached);
-
- btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
-
- for (index = 0; index < nr_pages; index++) {
- btrfs_do_readpage(pages[index], em_cached, bio_ctrl,
- prev_em_start);
- put_page(pages[index]);
- }
-}
-
/*
* helper for __extent_writepage, doing all of the delayed allocation setup.
*
@@ -2379,18 +2359,18 @@ int btrfs_writepages(struct address_space *mapping, struct writeback_control *wb
void btrfs_readahead(struct readahead_control *rac)
{
struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ | REQ_RAHEAD };
- struct page *pagepool[16];
+ struct btrfs_inode *inode = BTRFS_I(rac->mapping->host);
+ struct folio *folio;
+ u64 start = readahead_pos(rac);
+ u64 end = start + readahead_length(rac) - 1;
struct extent_map *em_cached = NULL;
u64 prev_em_start = (u64)-1;
- int nr;
- while ((nr = readahead_page_batch(rac, pagepool))) {
- u64 contig_start = readahead_pos(rac);
- u64 contig_end = contig_start + readahead_batch_length(rac) - 1;
+ btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
- contiguous_readpages(pagepool, nr, contig_start, contig_end,
- &em_cached, &bio_ctrl, &prev_em_start);
- }
+ while ((folio = readahead_folio(rac)) != NULL)
+ btrfs_do_readpage(&folio->page, &em_cached, &bio_ctrl,
+ &prev_em_start);
if (em_cached)
free_extent_map(em_cached);
--
2.43.0
* [PATCH 02/46] btrfs: convert btrfs_read_folio to only use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
2024-07-26 19:35 ` [PATCH 01/46] btrfs: convert btrfs_readahead to only use folio Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 03/46] btrfs: convert end_page_read to take " Josef Bacik
` (45 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Currently we're using the page for everything here. Convert this to use
the folio helpers instead.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
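[Not from the patch: a small sketch of the range math this conversion relies
on; example_folio_range is a hypothetical helper. With single-page folios the
values match the old page_offset()/PAGE_SIZE arithmetic exactly, but
folio_pos()/folio_size() stay correct if larger folios ever show up here.]

#include <linux/pagemap.h>
#include <linux/types.h>

static void example_folio_range(struct folio *folio, u64 *start, u64 *end)
{
        *start = folio_pos(folio);              /* was page_offset(page) */
        *end = *start + folio_size(folio) - 1;  /* was start + PAGE_SIZE - 1 */
}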
fs/btrfs/extent_io.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a4d5f8d00f04..4b7d1881d023 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1155,17 +1155,16 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
int btrfs_read_folio(struct file *file, struct folio *folio)
{
- struct page *page = &folio->page;
- struct btrfs_inode *inode = page_to_inode(page);
- u64 start = page_offset(page);
- u64 end = start + PAGE_SIZE - 1;
+ struct btrfs_inode *inode = folio_to_inode(folio);
+ u64 start = folio_pos(folio);
+ u64 end = start + folio_size(folio) - 1;
struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
struct extent_map *em_cached = NULL;
int ret;
btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
- ret = btrfs_do_readpage(page, &em_cached, &bio_ctrl, NULL);
+ ret = btrfs_do_readpage(&folio->page, &em_cached, &bio_ctrl, NULL);
free_extent_map(em_cached);
/*
--
2.43.0
* [PATCH 03/46] btrfs: convert end_page_read to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
2024-07-26 19:35 ` [PATCH 01/46] btrfs: convert btrfs_readahead to only use folio Josef Bacik
2024-07-26 19:35 ` [PATCH 02/46] btrfs: convert btrfs_read_folio to only use a folio Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 04/46] btrfs: convert begin_page_read to take a folio instead Josef Bacik
` (44 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We have this helper function to set the page range uptodate once we're
done reading it, as well as run fsverity against it. Half of the
functions involved already take a folio, so rename this to end_folio_read,
rework it to take a folio instead, and update everything accordingly.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 39 ++++++++++++++++++++-------------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 4b7d1881d023..2d6b1bc74109 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -406,30 +406,31 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
start, end, page_ops);
}
-static bool btrfs_verify_page(struct page *page, u64 start)
+static bool btrfs_verify_folio(struct folio *folio, u64 start, u32 len)
{
- if (!fsverity_active(page->mapping->host) ||
- PageUptodate(page) ||
- start >= i_size_read(page->mapping->host))
+ struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
+
+ if (!fsverity_active(folio->mapping->host) ||
+ btrfs_folio_test_uptodate(fs_info, folio, start, len) ||
+ start >= i_size_read(folio->mapping->host))
return true;
- return fsverity_verify_page(page);
+ return fsverity_verify_folio(folio);
}
-static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len)
+static void end_folio_read(struct folio *folio, bool uptodate, u64 start, u32 len)
{
- struct btrfs_fs_info *fs_info = page_to_fs_info(page);
- struct folio *folio = page_folio(page);
+ struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
- ASSERT(page_offset(page) <= start &&
- start + len <= page_offset(page) + PAGE_SIZE);
+ ASSERT(folio_pos(folio) <= start &&
+ start + len <= folio_pos(folio) + PAGE_SIZE);
- if (uptodate && btrfs_verify_page(page, start))
+ if (uptodate && btrfs_verify_folio(folio, start, len))
btrfs_folio_set_uptodate(fs_info, folio, start, len);
else
btrfs_folio_clear_uptodate(fs_info, folio, start, len);
- if (!btrfs_is_subpage(fs_info, page->mapping))
- unlock_page(page);
+ if (!btrfs_is_subpage(fs_info, folio->mapping))
+ folio_unlock(folio);
else
btrfs_subpage_end_reader(fs_info, folio, start, len);
}
@@ -642,7 +643,7 @@ static void end_bbio_data_read(struct btrfs_bio *bbio)
}
/* Update page status and unlock. */
- end_page_read(folio_page(folio, 0), uptodate, start, len);
+ end_folio_read(folio, uptodate, start, len);
endio_readpage_release_extent(&processed, BTRFS_I(inode),
start, end, uptodate);
}
@@ -1048,13 +1049,13 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
iosize = PAGE_SIZE - pg_offset;
memzero_page(page, pg_offset, iosize);
unlock_extent(tree, cur, cur + iosize - 1, NULL);
- end_page_read(page, true, cur, iosize);
+ end_folio_read(page_folio(page), true, cur, iosize);
break;
}
em = __get_extent_map(inode, page, cur, end - cur + 1, em_cached);
if (IS_ERR(em)) {
unlock_extent(tree, cur, end, NULL);
- end_page_read(page, false, cur, end + 1 - cur);
+ end_folio_read(page_folio(page), false, cur, end + 1 - cur);
return PTR_ERR(em);
}
extent_offset = cur - em->start;
@@ -1123,7 +1124,7 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
memzero_page(page, pg_offset, iosize);
unlock_extent(tree, cur, cur + iosize - 1, NULL);
- end_page_read(page, true, cur, iosize);
+ end_folio_read(page_folio(page), true, cur, iosize);
cur = cur + iosize;
pg_offset += iosize;
continue;
@@ -1131,7 +1132,7 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
/* the get_extent function already copied into the page */
if (block_start == EXTENT_MAP_INLINE) {
unlock_extent(tree, cur, cur + iosize - 1, NULL);
- end_page_read(page, true, cur, iosize);
+ end_folio_read(page_folio(page), true, cur, iosize);
cur = cur + iosize;
pg_offset += iosize;
continue;
@@ -2551,7 +2552,7 @@ static bool folio_range_has_eb(struct btrfs_fs_info *fs_info, struct folio *foli
return true;
/*
* Even there is no eb refs here, we may still have
- * end_page_read() call relying on page::private.
+ * end_folio_read() call relying on page::private.
*/
if (atomic_read(&subpage->readers))
return true;
--
2.43.0
* [PATCH 04/46] btrfs: convert begin_page_read to take a folio instead
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (2 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 03/46] btrfs: convert end_page_read to take " Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 05/46] btrfs: convert submit_extent_page to use a folio Josef Bacik
` (43 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
This already uses a folio internally; change it to take a folio as an
argument instead.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 2d6b1bc74109..89938800f37a 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -551,16 +551,14 @@ static void endio_readpage_release_extent(struct processed_extent *processed,
processed->uptodate = uptodate;
}
-static void begin_page_read(struct btrfs_fs_info *fs_info, struct page *page)
+static void begin_folio_read(struct btrfs_fs_info *fs_info, struct folio *folio)
{
- struct folio *folio = page_folio(page);
-
ASSERT(folio_test_locked(folio));
if (!btrfs_is_subpage(fs_info, folio->mapping))
return;
ASSERT(folio_test_private(folio));
- btrfs_subpage_start_reader(fs_info, folio, page_offset(page), PAGE_SIZE);
+ btrfs_subpage_start_reader(fs_info, folio, folio_pos(folio), PAGE_SIZE);
}
/*
@@ -1038,7 +1036,7 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
}
}
bio_ctrl->end_io_func = end_bbio_data_read;
- begin_page_read(fs_info, page);
+ begin_folio_read(fs_info, page_folio(page));
while (cur <= end) {
enum btrfs_compression_type compress_type = BTRFS_COMPRESS_NONE;
bool force_bio_submit = false;
--
2.43.0
* [PATCH 05/46] btrfs: convert submit_extent_page to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (3 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 04/46] btrfs: convert begin_page_read to take a folio instead Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 06/46] btrfs: convert btrfs_do_readpage to only " Josef Bacik
` (42 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
The callers of this helper are going to be converted to using a folio,
so adjust submit_extent_page to become submit_extent_folio and update it
to use all the relevant folio helpers.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
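[Not btrfs code: a hypothetical sketch of the return-value difference the
patch has to handle. bio_add_page() returns the number of bytes attached and
is compared against the requested length, while bio_add_folio() returns a
bool, so the "bio is full" check becomes a simple negation.]

#include <linux/bio.h>

static bool example_try_add_folio(struct bio *bio, struct folio *folio,
                                  size_t len, size_t offset)
{
        /* false means the bio is full: submit it and retry with a new one. */
        if (!bio_add_folio(bio, folio, len, offset))
                return false;
        return true;
}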
fs/btrfs/extent_io.c | 42 ++++++++++++++++++++++--------------------
1 file changed, 22 insertions(+), 20 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 89938800f37a..612855e17d04 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -736,12 +736,13 @@ static int alloc_eb_folio_array(struct extent_buffer *eb, bool nofail)
}
static bool btrfs_bio_is_contig(struct btrfs_bio_ctrl *bio_ctrl,
- struct page *page, u64 disk_bytenr,
+ struct folio *folio, u64 disk_bytenr,
unsigned int pg_offset)
{
struct bio *bio = &bio_ctrl->bbio->bio;
struct bio_vec *bvec = bio_last_bvec_all(bio);
const sector_t sector = disk_bytenr >> SECTOR_SHIFT;
+ struct folio *bv_folio = page_folio(bvec->bv_page);
if (bio_ctrl->compress_type != BTRFS_COMPRESS_NONE) {
/*
@@ -754,7 +755,7 @@ static bool btrfs_bio_is_contig(struct btrfs_bio_ctrl *bio_ctrl,
/*
* The contig check requires the following conditions to be met:
*
- * 1) The pages are belonging to the same inode
+ * 1) The folios are belonging to the same inode
* This is implied by the call chain.
*
* 2) The range has adjacent logical bytenr
@@ -763,8 +764,8 @@ static bool btrfs_bio_is_contig(struct btrfs_bio_ctrl *bio_ctrl,
* This is required for the usage of btrfs_bio->file_offset.
*/
return bio_end_sector(bio) == sector &&
- page_offset(bvec->bv_page) + bvec->bv_offset + bvec->bv_len ==
- page_offset(page) + pg_offset;
+ folio_pos(bv_folio) + bvec->bv_offset + bvec->bv_len ==
+ folio_pos(folio) + pg_offset;
}
static void alloc_new_bio(struct btrfs_inode *inode,
@@ -817,17 +818,17 @@ static void alloc_new_bio(struct btrfs_inode *inode,
* The mirror number for this IO should already be initizlied in
* @bio_ctrl->mirror_num.
*/
-static void submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl,
- u64 disk_bytenr, struct page *page,
+static void submit_extent_folio(struct btrfs_bio_ctrl *bio_ctrl,
+ u64 disk_bytenr, struct folio *folio,
size_t size, unsigned long pg_offset)
{
- struct btrfs_inode *inode = page_to_inode(page);
+ struct btrfs_inode *inode = folio_to_inode(folio);
ASSERT(pg_offset + size <= PAGE_SIZE);
ASSERT(bio_ctrl->end_io_func);
if (bio_ctrl->bbio &&
- !btrfs_bio_is_contig(bio_ctrl, page, disk_bytenr, pg_offset))
+ !btrfs_bio_is_contig(bio_ctrl, folio, disk_bytenr, pg_offset))
submit_one_bio(bio_ctrl);
do {
@@ -836,7 +837,7 @@ static void submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl,
/* Allocate new bio if needed */
if (!bio_ctrl->bbio) {
alloc_new_bio(inode, bio_ctrl, disk_bytenr,
- page_offset(page) + pg_offset);
+ folio_pos(folio) + pg_offset);
}
/* Cap to the current ordered extent boundary if there is one. */
@@ -846,21 +847,22 @@ static void submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl,
len = bio_ctrl->len_to_oe_boundary;
}
- if (bio_add_page(&bio_ctrl->bbio->bio, page, len, pg_offset) != len) {
+ if (!bio_add_folio(&bio_ctrl->bbio->bio, folio, len, pg_offset)) {
/* bio full: move on to a new one */
submit_one_bio(bio_ctrl);
continue;
}
if (bio_ctrl->wbc)
- wbc_account_cgroup_owner(bio_ctrl->wbc, page, len);
+ wbc_account_cgroup_owner(bio_ctrl->wbc, &folio->page,
+ len);
size -= len;
pg_offset += len;
disk_bytenr += len;
/*
- * len_to_oe_boundary defaults to U32_MAX, which isn't page or
+ * len_to_oe_boundary defaults to U32_MAX, which isn't folio or
* sector aligned. alloc_new_bio() then sets it to the end of
* our ordered extent for writes into zoned devices.
*
@@ -870,15 +872,15 @@ static void submit_extent_page(struct btrfs_bio_ctrl *bio_ctrl,
* boundary is correct.
*
* When len_to_oe_boundary is U32_MAX, the cap above would
- * result in a 4095 byte IO for the last page right before
- * we hit the bio limit of UINT_MAX. bio_add_page() has all
+ * result in a 4095 byte IO for the last folio right before
+ * we hit the bio limit of UINT_MAX. bio_add_folio() has all
* the checks required to make sure we don't overflow the bio,
* and we should just ignore len_to_oe_boundary completely
* unless we're using it to track an ordered extent.
*
* It's pretty hard to make a bio sized U32_MAX, but it can
* happen when the page cache is able to feed us contiguous
- * pages for large extents.
+ * folios for large extents.
*/
if (bio_ctrl->len_to_oe_boundary != U32_MAX)
bio_ctrl->len_to_oe_boundary -= len;
@@ -1143,8 +1145,8 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
if (force_bio_submit)
submit_one_bio(bio_ctrl);
- submit_extent_page(bio_ctrl, disk_bytenr, page, iosize,
- pg_offset);
+ submit_extent_folio(bio_ctrl, disk_bytenr, page_folio(page),
+ iosize, pg_offset);
cur = cur + iosize;
pg_offset += iosize;
}
@@ -1489,8 +1491,8 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
*/
btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, iosize);
- submit_extent_page(bio_ctrl, disk_bytenr, page, iosize,
- cur - page_offset(page));
+ submit_extent_folio(bio_ctrl, disk_bytenr, page_folio(page),
+ iosize, cur - page_offset(page));
cur += iosize;
nr++;
}
@@ -2087,7 +2089,7 @@ int btree_write_cache_pages(struct address_space *mapping,
* extent io tree. Thus we don't want to submit such wild eb
* if the fs already has error.
*
- * We can get ret > 0 from submit_extent_page() indicating how many ebs
+ * We can get ret > 0 from submit_extent_folio() indicating how many ebs
* were submitted. Reset it to 0 to avoid false alerts for the caller.
*/
if (ret > 0)
--
2.43.0
* [PATCH 06/46] btrfs: convert btrfs_do_readpage to only use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (4 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 05/46] btrfs: convert submit_extent_page to use a folio Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 07/46] btrfs: update the writepage tracepoint to take " Josef Bacik
` (41 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that the callers and helpers mostly use folios, convert
btrfs_do_readpage to take a folio and update all of the page usage to
use the folio-based helpers instead.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
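[Not from the patch: a hypothetical sketch of the EOF-tail zeroing pattern
the conversion uses, where folio_shift(), offset_in_folio() and
folio_zero_range() replace the PAGE_SHIFT/offset_in_page()/memzero_page()
combination. example_zero_eof_tail and its exact bounds are assumptions for
illustration only.]

#include <linux/highmem.h>
#include <linux/pagemap.h>

static void example_zero_eof_tail(struct folio *folio, loff_t isize)
{
        if (!isize)
                return;

        /* Only the folio containing the last byte of the file needs zeroing. */
        if (folio->index == (isize - 1) >> folio_shift(folio)) {
                size_t off = offset_in_folio(folio, isize);

                if (off)
                        folio_zero_range(folio, off, folio_size(folio) - off);
        }
}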
fs/btrfs/extent_io.c | 58 ++++++++++++++++++++++----------------------
1 file changed, 29 insertions(+), 29 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 612855e17d04..973028a9ba3f 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1004,12 +1004,12 @@ static struct extent_map *__get_extent_map(struct inode *inode, struct page *pag
* XXX JDM: This needs looking at to ensure proper page locking
* return 0 on success, otherwise return error
*/
-static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
+static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
struct btrfs_bio_ctrl *bio_ctrl, u64 *prev_em_start)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = folio->mapping->host;
struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
- u64 start = page_offset(page);
+ u64 start = folio_pos(folio);
const u64 end = start + PAGE_SIZE - 1;
u64 cur = start;
u64 extent_offset;
@@ -1022,23 +1022,23 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
size_t blocksize = fs_info->sectorsize;
struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
- ret = set_page_extent_mapped(page);
+ ret = set_folio_extent_mapped(folio);
if (ret < 0) {
unlock_extent(tree, start, end, NULL);
- unlock_page(page);
+ folio_unlock(folio);
return ret;
}
- if (page->index == last_byte >> PAGE_SHIFT) {
- size_t zero_offset = offset_in_page(last_byte);
+ if (folio->index == last_byte >> folio_shift(folio)) {
+ size_t zero_offset = offset_in_folio(folio, last_byte);
if (zero_offset) {
- iosize = PAGE_SIZE - zero_offset;
- memzero_page(page, zero_offset, iosize);
+ iosize = folio_size(folio) - zero_offset;
+ folio_zero_range(folio, zero_offset, iosize);
}
}
bio_ctrl->end_io_func = end_bbio_data_read;
- begin_folio_read(fs_info, page_folio(page));
+ begin_folio_read(fs_info, folio);
while (cur <= end) {
enum btrfs_compression_type compress_type = BTRFS_COMPRESS_NONE;
bool force_bio_submit = false;
@@ -1046,16 +1046,17 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
ASSERT(IS_ALIGNED(cur, fs_info->sectorsize));
if (cur >= last_byte) {
- iosize = PAGE_SIZE - pg_offset;
- memzero_page(page, pg_offset, iosize);
+ iosize = folio_size(folio) - pg_offset;
+ folio_zero_range(folio, pg_offset, iosize);
unlock_extent(tree, cur, cur + iosize - 1, NULL);
- end_folio_read(page_folio(page), true, cur, iosize);
+ end_folio_read(folio, true, cur, iosize);
break;
}
- em = __get_extent_map(inode, page, cur, end - cur + 1, em_cached);
+ em = __get_extent_map(inode, folio_page(folio, 0), cur,
+ end - cur + 1, em_cached);
if (IS_ERR(em)) {
unlock_extent(tree, cur, end, NULL);
- end_folio_read(page_folio(page), false, cur, end + 1 - cur);
+ end_folio_read(folio, false, cur, end + 1 - cur);
return PTR_ERR(em);
}
extent_offset = cur - em->start;
@@ -1080,8 +1081,8 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
* to the same compressed extent (possibly with a different
* offset and/or length, so it either points to the whole extent
* or only part of it), we must make sure we do not submit a
- * single bio to populate the pages for the 2 ranges because
- * this makes the compressed extent read zero out the pages
+ * single bio to populate the folios for the 2 ranges because
+ * this makes the compressed extent read zero out the folios
* belonging to the 2nd range. Imagine the following scenario:
*
* File layout
@@ -1094,13 +1095,13 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
* [extent X, compressed length = 4K uncompressed length = 16K]
*
* If the bio to read the compressed extent covers both ranges,
- * it will decompress extent X into the pages belonging to the
+ * it will decompress extent X into the folios belonging to the
* first range and then it will stop, zeroing out the remaining
- * pages that belong to the other range that points to extent X.
+ * folios that belong to the other range that points to extent X.
* So here we make sure we submit 2 bios, one for the first
* range and another one for the third range. Both will target
* the same physical extent from disk, but we can't currently
- * make the compressed bio endio callback populate the pages
+ * make the compressed bio endio callback populate the folios
* for both ranges because each compressed bio is tightly
* coupled with a single extent map, and each range can have
* an extent map with a different offset value relative to the
@@ -1121,18 +1122,18 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
/* we've found a hole, just zero and go on */
if (block_start == EXTENT_MAP_HOLE) {
- memzero_page(page, pg_offset, iosize);
+ folio_zero_range(folio, pg_offset, iosize);
unlock_extent(tree, cur, cur + iosize - 1, NULL);
- end_folio_read(page_folio(page), true, cur, iosize);
+ end_folio_read(folio, true, cur, iosize);
cur = cur + iosize;
pg_offset += iosize;
continue;
}
- /* the get_extent function already copied into the page */
+ /* the get_extent function already copied into the folio */
if (block_start == EXTENT_MAP_INLINE) {
unlock_extent(tree, cur, cur + iosize - 1, NULL);
- end_folio_read(page_folio(page), true, cur, iosize);
+ end_folio_read(folio, true, cur, iosize);
cur = cur + iosize;
pg_offset += iosize;
continue;
@@ -1145,8 +1146,8 @@ static int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
if (force_bio_submit)
submit_one_bio(bio_ctrl);
- submit_extent_folio(bio_ctrl, disk_bytenr, page_folio(page),
- iosize, pg_offset);
+ submit_extent_folio(bio_ctrl, disk_bytenr, folio, iosize,
+ pg_offset);
cur = cur + iosize;
pg_offset += iosize;
}
@@ -1165,7 +1166,7 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
- ret = btrfs_do_readpage(&folio->page, &em_cached, &bio_ctrl, NULL);
+ ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
free_extent_map(em_cached);
/*
@@ -2369,8 +2370,7 @@ void btrfs_readahead(struct readahead_control *rac)
btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
while ((folio = readahead_folio(rac)) != NULL)
- btrfs_do_readpage(&folio->page, &em_cached, &bio_ctrl,
- &prev_em_start);
+ btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
if (em_cached)
free_extent_map(em_cached);
--
2.43.0
* [PATCH 07/46] btrfs: update the writepage tracepoint to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (5 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 06/46] btrfs: convert btrfs_do_readpage to only " Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 08/46] btrfs: convert __extent_writepage_io " Josef Bacik
` (40 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Willy wants to get rid of page->index, so convert the writepage
tracepoint to take a folio and use folio->index instead of page->index.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 2 +-
include/trace/events/btrfs.h | 10 +++++-----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 973028a9ba3f..eed2be8afb15 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1531,7 +1531,7 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
loff_t i_size = i_size_read(inode);
unsigned long end_index = i_size >> PAGE_SHIFT;
- trace___extent_writepage(page, inode, bio_ctrl->wbc);
+ trace___extent_writepage(folio, inode, bio_ctrl->wbc);
WARN_ON(!PageLocked(page));
diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
index eeb56975bee7..3af681642652 100644
--- a/include/trace/events/btrfs.h
+++ b/include/trace/events/btrfs.h
@@ -674,10 +674,10 @@ TRACE_EVENT(btrfs_finish_ordered_extent,
DECLARE_EVENT_CLASS(btrfs__writepage,
- TP_PROTO(const struct page *page, const struct inode *inode,
+ TP_PROTO(const struct folio *folio, const struct inode *inode,
const struct writeback_control *wbc),
- TP_ARGS(page, inode, wbc),
+ TP_ARGS(folio, inode, wbc),
TP_STRUCT__entry_btrfs(
__field( u64, ino )
@@ -695,7 +695,7 @@ DECLARE_EVENT_CLASS(btrfs__writepage,
TP_fast_assign_btrfs(btrfs_sb(inode->i_sb),
__entry->ino = btrfs_ino(BTRFS_I(inode));
- __entry->index = page->index;
+ __entry->index = folio->index;
__entry->nr_to_write = wbc->nr_to_write;
__entry->pages_skipped = wbc->pages_skipped;
__entry->range_start = wbc->range_start;
@@ -723,10 +723,10 @@ DECLARE_EVENT_CLASS(btrfs__writepage,
DEFINE_EVENT(btrfs__writepage, __extent_writepage,
- TP_PROTO(const struct page *page, const struct inode *inode,
+ TP_PROTO(const struct folio *folio, const struct inode *inode,
const struct writeback_control *wbc),
- TP_ARGS(page, inode, wbc)
+ TP_ARGS(folio, inode, wbc)
);
TRACE_EVENT(btrfs_writepage_end_io_hook,
--
2.43.0
* [PATCH 08/46] btrfs: convert __extent_writepage_io to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (6 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 07/46] btrfs: update the writepage tracepoint to take " Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 09/46] btrfs: convert extent_write_locked_range to use folios Josef Bacik
` (39 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
__extent_writepage_io uses a page everywhere, but a lot of the functions
it calls take a folio. Convert it to use the folio-based helpers, change
it to take a folio as an argument, and update its callers.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
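[Not btrfs code: a hypothetical sketch of the requeue path being converted.
When the fixup worker will retry the write, the folio is redirtied for a
later writeback pass and unlocked, mirroring the
folio_redirty_for_writepage()/folio_unlock() pair in the diff below.]

#include <linux/pagemap.h>
#include <linux/writeback.h>

static int example_requeue_for_writeback(struct writeback_control *wbc,
                                         struct folio *folio)
{
        folio_redirty_for_writepage(wbc, folio);
        folio_unlock(folio);
        /* The caller treats 1 as "folio handed back, nothing submitted". */
        return 1;
}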
fs/btrfs/extent_io.c | 50 ++++++++++++++++++++++----------------------
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index eed2be8afb15..63ec7efd307f 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1393,10 +1393,10 @@ static void find_next_dirty_byte(const struct btrfs_fs_info *fs_info,
* < 0 if there were errors (page still locked)
*/
static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
- struct page *page, u64 start, u32 len,
- struct btrfs_bio_ctrl *bio_ctrl,
- loff_t i_size,
- int *nr_ret)
+ struct folio *folio,
+ u64 start, u32 len,
+ struct btrfs_bio_ctrl *bio_ctrl,
+ loff_t i_size, int *nr_ret)
{
struct btrfs_fs_info *fs_info = inode->root->fs_info;
u64 cur = start;
@@ -1407,14 +1407,14 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
int ret = 0;
int nr = 0;
- ASSERT(start >= page_offset(page) &&
- start + len <= page_offset(page) + PAGE_SIZE);
+ ASSERT(start >= folio_pos(folio) &&
+ start + len <= folio_pos(folio) + folio_size(folio));
- ret = btrfs_writepage_cow_fixup(page);
+ ret = btrfs_writepage_cow_fixup(&folio->page);
if (ret) {
/* Fixup worker will requeue */
- redirty_page_for_writepage(bio_ctrl->wbc, page);
- unlock_page(page);
+ folio_redirty_for_writepage(bio_ctrl->wbc, folio);
+ folio_unlock(folio);
return 1;
}
@@ -1428,21 +1428,21 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
u32 iosize;
if (cur >= i_size) {
- btrfs_mark_ordered_io_finished(inode, page, cur, len,
- true);
+ btrfs_mark_ordered_io_finished(inode, &folio->page, cur,
+ len, true);
/*
* This range is beyond i_size, thus we don't need to
* bother writing back.
* But we still need to clear the dirty subpage bit, or
- * the next time the page gets dirtied, we will try to
+ * the next time the folio gets dirtied, we will try to
* writeback the sectors with subpage dirty bits,
* causing writeback without ordered extent.
*/
- btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, len);
+ btrfs_folio_clear_dirty(fs_info, folio, cur, len);
break;
}
- find_next_dirty_byte(fs_info, page, &dirty_range_start,
+ find_next_dirty_byte(fs_info, &folio->page, &dirty_range_start,
&dirty_range_end);
if (cur < dirty_range_start) {
cur = dirty_range_start;
@@ -1478,33 +1478,33 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
em = NULL;
btrfs_set_range_writeback(inode, cur, cur + iosize - 1);
- if (!PageWriteback(page)) {
+ if (!folio_test_writeback(folio)) {
btrfs_err(inode->root->fs_info,
- "page %lu not writeback, cur %llu end %llu",
- page->index, cur, end);
+ "folio %lu not writeback, cur %llu end %llu",
+ folio->index, cur, end);
}
/*
* Although the PageDirty bit is cleared before entering this
* function, subpage dirty bit is not cleared.
* So clear subpage dirty bit here so next time we won't submit
- * page for range already written to disk.
+ * folio for range already written to disk.
*/
- btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, iosize);
+ btrfs_folio_clear_dirty(fs_info, folio, cur, iosize);
- submit_extent_folio(bio_ctrl, disk_bytenr, page_folio(page),
- iosize, cur - page_offset(page));
+ submit_extent_folio(bio_ctrl, disk_bytenr, folio,
+ iosize, cur - folio_pos(folio));
cur += iosize;
nr++;
}
- btrfs_folio_assert_not_dirty(fs_info, page_folio(page), start, len);
+ btrfs_folio_assert_not_dirty(fs_info, folio, start, len);
*nr_ret = nr;
return 0;
out_error:
/*
- * If we finish without problem, we should not only clear page dirty,
+ * If we finish without problem, we should not only clear folio dirty,
* but also empty subpage dirty bits
*/
*nr_ret = nr;
@@ -1556,7 +1556,7 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
if (ret)
goto done;
- ret = __extent_writepage_io(BTRFS_I(inode), page, page_offset(page),
+ ret = __extent_writepage_io(BTRFS_I(inode), folio, folio_pos(folio),
PAGE_SIZE, bio_ctrl, i_size, &nr);
if (ret == 1)
return 0;
@@ -2308,7 +2308,7 @@ void extent_write_locked_range(struct inode *inode, const struct page *locked_pa
if (pages_dirty && page != locked_page)
ASSERT(PageDirty(page));
- ret = __extent_writepage_io(BTRFS_I(inode), page, cur, cur_len,
+ ret = __extent_writepage_io(BTRFS_I(inode), page_folio(page), cur, cur_len,
&bio_ctrl, i_size, &nr);
if (ret == 1)
goto next_page;
--
2.43.0
* [PATCH 09/46] btrfs: convert extent_write_locked_range to use folios
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (7 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 08/46] btrfs: convert __extent_writepage_io " Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 10/46] btrfs: convert __extent_writepage to be completely folio based Josef Bacik
` (38 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Instead of using pages for everything, find a folio and use that. This
makes things a bit cleaner, as a lot of the function calls here take
folios.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
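[Not from the patch: a hypothetical sketch of the lookup this switches to.
With no FGP flags and a zero gfp mask, __filemap_get_folio() only finds an
already-cached folio and returns an ERR_PTR rather than NULL when nothing is
present, hence the IS_ERR() check in the diff below.]

#include <linux/err.h>
#include <linux/pagemap.h>

static struct folio *example_lookup_cached(struct address_space *mapping,
                                           pgoff_t index)
{
        struct folio *folio = __filemap_get_folio(mapping, index, 0, 0);

        if (IS_ERR(folio))
                return NULL;    /* nothing cached at this index */
        return folio;           /* caller holds a reference: folio_put() later */
}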
fs/btrfs/extent_io.c | 36 +++++++++++++++++++++++-------------
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 63ec7efd307f..a04fc920b0e6 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2300,37 +2300,47 @@ void extent_write_locked_range(struct inode *inode, const struct page *locked_pa
while (cur <= end) {
u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end);
u32 cur_len = cur_end + 1 - cur;
- struct page *page;
+ struct folio *folio;
int nr = 0;
- page = find_get_page(mapping, cur >> PAGE_SHIFT);
- ASSERT(PageLocked(page));
- if (pages_dirty && page != locked_page)
- ASSERT(PageDirty(page));
+ folio = __filemap_get_folio(mapping, cur >> PAGE_SHIFT, 0, 0);
- ret = __extent_writepage_io(BTRFS_I(inode), page_folio(page), cur, cur_len,
+ /*
+ * This shouldn't happen, the pages are pinned and locked, this
+ * code is just in case, but shouldn't actually be run.
+ */
+ if (IS_ERR(folio)) {
+ btrfs_mark_ordered_io_finished(BTRFS_I(inode), NULL,
+ cur, cur_len, false);
+ mapping_set_error(mapping, PTR_ERR(folio));
+ cur = cur_end + 1;
+ continue;
+ }
+
+ ASSERT(folio_test_locked(folio));
+ if (pages_dirty && &folio->page != locked_page)
+ ASSERT(folio_test_dirty(folio));
+
+ ret = __extent_writepage_io(BTRFS_I(inode), folio, cur, cur_len,
&bio_ctrl, i_size, &nr);
if (ret == 1)
goto next_page;
/* Make sure the mapping tag for page dirty gets cleared. */
if (nr == 0) {
- struct folio *folio;
-
- folio = page_folio(page);
btrfs_folio_set_writeback(fs_info, folio, cur, cur_len);
btrfs_folio_clear_writeback(fs_info, folio, cur, cur_len);
}
if (ret) {
- btrfs_mark_ordered_io_finished(BTRFS_I(inode), page,
+ btrfs_mark_ordered_io_finished(BTRFS_I(inode), &folio->page,
cur, cur_len, !ret);
- mapping_set_error(page->mapping, ret);
+ mapping_set_error(mapping, ret);
}
- btrfs_folio_unlock_writer(fs_info, page_folio(page), cur, cur_len);
+ btrfs_folio_unlock_writer(fs_info, folio, cur, cur_len);
if (ret < 0)
found_error = true;
next_page:
- put_page(page);
+ folio_put(folio);
cur = cur_end + 1;
}
--
2.43.0
* [PATCH 10/46] btrfs: convert __extent_writepage to be completely folio based
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (8 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 09/46] btrfs: convert extent_write_locked_range to use folios Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 11/46] btrfs: convert add_ra_bio_pages to use only folios Josef Bacik
` (37 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that we've gotten most of the helpers updated to only take a folio,
update __extent_writepage to only deal in folios.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
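[Not btrfs code: a hypothetical sketch of the "nothing submitted" path kept
by the conversion. Starting and immediately ending writeback is how the code
makes sure the mapping tag for page dirty gets cleared, now spelled with the
folio helpers.]

#include <linux/pagemap.h>

static void example_clear_dirty_tag(struct folio *folio)
{
        /* Equivalent of the old set_page_writeback()/end_page_writeback() pair. */
        folio_start_writeback(folio);
        folio_end_writeback(folio);
}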
fs/btrfs/extent_io.c | 35 +++++++++++++++++------------------
1 file changed, 17 insertions(+), 18 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a04fc920b0e6..da60ec1e866a 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1520,11 +1520,10 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
* Return 0 if everything goes well.
* Return <0 for error.
*/
-static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl)
+static int __extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl)
{
- struct folio *folio = page_folio(page);
- struct inode *inode = page->mapping->host;
- const u64 page_start = page_offset(page);
+ struct inode *inode = folio->mapping->host;
+ const u64 page_start = folio_pos(folio);
int ret;
int nr = 0;
size_t pg_offset;
@@ -1533,24 +1532,24 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
trace___extent_writepage(folio, inode, bio_ctrl->wbc);
- WARN_ON(!PageLocked(page));
+ WARN_ON(!folio_test_locked(folio));
- pg_offset = offset_in_page(i_size);
- if (page->index > end_index ||
- (page->index == end_index && !pg_offset)) {
+ pg_offset = offset_in_folio(folio, i_size);
+ if (folio->index > end_index ||
+ (folio->index == end_index && !pg_offset)) {
folio_invalidate(folio, 0, folio_size(folio));
folio_unlock(folio);
return 0;
}
- if (page->index == end_index)
- memzero_page(page, pg_offset, PAGE_SIZE - pg_offset);
+ if (folio->index == end_index)
+ folio_zero_range(folio, pg_offset, folio_size(folio) - pg_offset);
- ret = set_page_extent_mapped(page);
+ ret = set_folio_extent_mapped(folio);
if (ret < 0)
goto done;
- ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc);
+ ret = writepage_delalloc(BTRFS_I(inode), &folio->page, bio_ctrl->wbc);
if (ret == 1)
return 0;
if (ret)
@@ -1566,13 +1565,13 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
done:
if (nr == 0) {
/* make sure the mapping tag for page dirty gets cleared */
- set_page_writeback(page);
- end_page_writeback(page);
+ folio_start_writeback(folio);
+ folio_end_writeback(folio);
}
if (ret) {
- btrfs_mark_ordered_io_finished(BTRFS_I(inode), page, page_start,
- PAGE_SIZE, !ret);
- mapping_set_error(page->mapping, ret);
+ btrfs_mark_ordered_io_finished(BTRFS_I(inode), &folio->page,
+ page_start, PAGE_SIZE, !ret);
+ mapping_set_error(folio->mapping, ret);
}
btrfs_folio_end_all_writers(inode_to_fs_info(inode), folio);
@@ -2229,7 +2228,7 @@ static int extent_write_cache_pages(struct address_space *mapping,
continue;
}
- ret = __extent_writepage(&folio->page, bio_ctrl);
+ ret = __extent_writepage(folio, bio_ctrl);
if (ret < 0) {
done = 1;
break;
--
2.43.0
* [PATCH 11/46] btrfs: convert add_ra_bio_pages to use only folios
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (9 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 10/46] btrfs: convert __extent_writepage to be completely folio based Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:35 ` [PATCH 12/46] btrfs: utilize folio more in btrfs_page_mkwrite Josef Bacik
` (36 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Willy is going to get rid of page->index, and add_ra_bio_pages uses
page->index. Make his life easier by converting add_ra_bio_pages to use
folios so that we are no longer using page->index.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
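[Not from the patch: a hypothetical sketch of the allocate-and-insert pattern
the conversion moves to. filemap_alloc_folio() takes over from
__page_cache_alloc() and filemap_add_folio() from add_to_page_cache_lru(); a
non-zero return from the latter typically means another task already
populated that index. example_add_one_folio and its gfp handling are
assumptions for illustration.]

#include <linux/pagemap.h>

static struct folio *example_add_one_folio(struct address_space *mapping,
                                           pgoff_t index, gfp_t gfp)
{
        struct folio *folio = filemap_alloc_folio(gfp, 0);      /* order-0 folio */

        if (!folio)
                return NULL;

        if (filemap_add_folio(mapping, folio, index, GFP_NOFS)) {
                /* Someone else already cached this index; skip past it. */
                folio_put(folio);
                return NULL;
        }
        /* On success the folio is locked and on the LRU, like the old path. */
        return folio;
}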
fs/btrfs/compression.c | 62 ++++++++++++++++++++++--------------------
1 file changed, 33 insertions(+), 29 deletions(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index a8e2c461aff7..832ab8984c41 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -420,7 +420,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
u64 cur = cb->orig_bbio->file_offset + orig_bio->bi_iter.bi_size;
u64 isize = i_size_read(inode);
int ret;
- struct page *page;
+ struct folio *folio;
struct extent_map *em;
struct address_space *mapping = inode->i_mapping;
struct extent_map_tree *em_tree;
@@ -453,9 +453,13 @@ static noinline int add_ra_bio_pages(struct inode *inode,
if (pg_index > end_index)
break;
- page = xa_load(&mapping->i_pages, pg_index);
- if (page && !xa_is_value(page)) {
- sectors_missed += (PAGE_SIZE - offset_in_page(cur)) >>
+ folio = __filemap_get_folio(mapping, pg_index, 0, 0);
+ if (!IS_ERR(folio)) {
+ u64 folio_sz = folio_size(folio);
+ u64 offset = offset_in_folio(folio, cur);
+
+ folio_put(folio);
+ sectors_missed += (folio_sz - offset) >>
fs_info->sectorsize_bits;
/* Beyond threshold, no need to continue */
@@ -466,35 +470,35 @@ static noinline int add_ra_bio_pages(struct inode *inode,
* Jump to next page start as we already have page for
* current offset.
*/
- cur = (pg_index << PAGE_SHIFT) + PAGE_SIZE;
+ cur += (folio_sz - offset);
continue;
}
- page = __page_cache_alloc(mapping_gfp_constraint(mapping,
- ~__GFP_FS));
- if (!page)
+ folio = filemap_alloc_folio(mapping_gfp_constraint(mapping,
+ ~__GFP_FS), 0);
+ if (!folio)
break;
- if (add_to_page_cache_lru(page, mapping, pg_index, GFP_NOFS)) {
- put_page(page);
+ if (filemap_add_folio(mapping, folio, pg_index, GFP_NOFS)) {
/* There is already a page, skip to page end */
- cur = (pg_index << PAGE_SHIFT) + PAGE_SIZE;
+ cur += folio_size(folio);
+ folio_put(folio);
continue;
}
- if (!*memstall && PageWorkingset(page)) {
+ if (!*memstall && folio_test_workingset(folio)) {
psi_memstall_enter(pflags);
*memstall = 1;
}
- ret = set_page_extent_mapped(page);
+ ret = set_folio_extent_mapped(folio);
if (ret < 0) {
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
break;
}
- page_end = (pg_index << PAGE_SHIFT) + PAGE_SIZE - 1;
+ page_end = (pg_index << PAGE_SHIFT) + folio_size(folio) - 1;
lock_extent(tree, cur, page_end, NULL);
read_lock(&em_tree->lock);
em = lookup_extent_mapping(em_tree, cur, page_end + 1 - cur);
@@ -511,28 +515,28 @@ static noinline int add_ra_bio_pages(struct inode *inode,
orig_bio->bi_iter.bi_sector) {
free_extent_map(em);
unlock_extent(tree, cur, page_end, NULL);
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
break;
}
add_size = min(em->start + em->len, page_end + 1) - cur;
free_extent_map(em);
- if (page->index == end_index) {
- size_t zero_offset = offset_in_page(isize);
+ if (folio->index == end_index) {
+ size_t zero_offset = offset_in_folio(folio, isize);
if (zero_offset) {
int zeros;
- zeros = PAGE_SIZE - zero_offset;
- memzero_page(page, zero_offset, zeros);
+ zeros = folio_size(folio) - zero_offset;
+ folio_zero_range(folio, zero_offset, zeros);
}
}
- ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
- if (ret != add_size) {
+ if (!bio_add_folio(orig_bio, folio, add_size,
+ offset_in_folio(folio, cur))) {
unlock_extent(tree, cur, page_end, NULL);
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
break;
}
/*
@@ -541,9 +545,9 @@ static noinline int add_ra_bio_pages(struct inode *inode,
* subpage::readers and to unlock the page.
*/
if (fs_info->sectorsize < PAGE_SIZE)
- btrfs_subpage_start_reader(fs_info, page_folio(page),
- cur, add_size);
- put_page(page);
+ btrfs_subpage_start_reader(fs_info, folio, cur,
+ add_size);
+ folio_put(folio);
cur += add_size;
}
return 0;
--
2.43.0
* [PATCH 12/46] btrfs: utilize folio more in btrfs_page_mkwrite
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (10 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 11/46] btrfs: convert add_ra_bio_pages to use only folios Josef Bacik
@ 2024-07-26 19:35 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 13/46] btrfs: convert can_finish_ordered_extent to use a folio Josef Bacik
` (35 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:35 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We already have a folio that we're using in btrfs_page_mkwrite; update
the rest of the function to use the folio everywhere else. This will
make it easier on Willy when he drops page->index.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
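[Not btrfs code: a hypothetical sketch of the revalidation a page_mkwrite
handler does after taking the folio lock, matching the check in the diff
below. The folio may have been truncated away while we waited, so the
mapping and i_size have to be rechecked before dirtying it.]

#include <linux/fs.h>
#include <linux/pagemap.h>

static bool example_mkwrite_still_valid(struct folio *folio,
                                        struct inode *inode, loff_t pos)
{
        if (folio->mapping != inode->i_mapping)
                return false;   /* folio was truncated or invalidated */
        if (pos >= i_size_read(inode))
                return false;   /* write starts beyond the current EOF */
        return true;
}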
fs/btrfs/file.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 21381de906f6..cac177c5622d 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1901,8 +1901,8 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
reserved_space = PAGE_SIZE;
sb_start_pagefault(inode->i_sb);
- page_start = page_offset(page);
- page_end = page_start + PAGE_SIZE - 1;
+ page_start = folio_pos(folio);
+ page_end = page_start + folio_size(folio) - 1;
end = page_end;
/*
@@ -1930,18 +1930,18 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
ret = VM_FAULT_NOPAGE;
again:
down_read(&BTRFS_I(inode)->i_mmap_lock);
- lock_page(page);
+ folio_lock(folio);
size = i_size_read(inode);
- if ((page->mapping != inode->i_mapping) ||
+ if ((folio->mapping != inode->i_mapping) ||
(page_start >= size)) {
/* Page got truncated out from underneath us. */
goto out_unlock;
}
- wait_on_page_writeback(page);
+ folio_wait_writeback(folio);
lock_extent(io_tree, page_start, page_end, &cached_state);
- ret2 = set_page_extent_mapped(page);
+ ret2 = set_folio_extent_mapped(folio);
if (ret2 < 0) {
ret = vmf_error(ret2);
unlock_extent(io_tree, page_start, page_end, &cached_state);
@@ -1955,14 +1955,14 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), page_start, PAGE_SIZE);
if (ordered) {
unlock_extent(io_tree, page_start, page_end, &cached_state);
- unlock_page(page);
+ folio_unlock(folio);
up_read(&BTRFS_I(inode)->i_mmap_lock);
btrfs_start_ordered_extent(ordered);
btrfs_put_ordered_extent(ordered);
goto again;
}
- if (page->index == ((size - 1) >> PAGE_SHIFT)) {
+ if (folio->index == ((size - 1) >> PAGE_SHIFT)) {
reserved_space = round_up(size - page_start, fs_info->sectorsize);
if (reserved_space < PAGE_SIZE) {
end = page_start + reserved_space - 1;
@@ -1992,13 +1992,13 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
}
/* Page is wholly or partially inside EOF. */
- if (page_start + PAGE_SIZE > size)
- zero_start = offset_in_page(size);
+ if (page_start + folio_size(folio) > size)
+ zero_start = offset_in_folio(folio, size);
else
zero_start = PAGE_SIZE;
if (zero_start != PAGE_SIZE)
- memzero_page(page, zero_start, PAGE_SIZE - zero_start);
+ folio_zero_range(folio, zero_start, folio_size(folio) - zero_start);
btrfs_folio_clear_checked(fs_info, folio, page_start, PAGE_SIZE);
btrfs_folio_set_dirty(fs_info, folio, page_start, end + 1 - page_start);
@@ -2015,7 +2015,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
return VM_FAULT_LOCKED;
out_unlock:
- unlock_page(page);
+ folio_unlock(folio);
up_read(&BTRFS_I(inode)->i_mmap_lock);
out:
btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE);
--
2.43.0
* [PATCH 13/46] btrfs: convert can_finish_ordered_extent to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (11 preceding siblings ...)
2024-07-26 19:35 ` [PATCH 12/46] btrfs: utilize folio more in btrfs_page_mkwrite Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 14/46] btrfs: convert btrfs_finish_ordered_extent to take " Josef Bacik
` (34 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Pass in a folio instead of a page, and use the folio throughout.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/ordered-data.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 82a68394a89c..760a37512c7e 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -332,7 +332,7 @@ static void finish_ordered_fn(struct btrfs_work *work)
}
static bool can_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
- struct page *page, u64 file_offset,
+ struct folio *folio, u64 file_offset,
u64 len, bool uptodate)
{
struct btrfs_inode *inode = ordered->inode;
@@ -340,10 +340,10 @@ static bool can_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
lockdep_assert_held(&inode->ordered_tree_lock);
- if (page) {
- ASSERT(page->mapping);
- ASSERT(page_offset(page) <= file_offset);
- ASSERT(file_offset + len <= page_offset(page) + PAGE_SIZE);
+ if (folio) {
+ ASSERT(folio->mapping);
+ ASSERT(folio_pos(folio) <= file_offset);
+ ASSERT(file_offset + len <= folio_pos(folio) + folio_size(folio));
/*
* Ordered (Private2) bit indicates whether we still have
@@ -351,10 +351,9 @@ static bool can_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
*
* If there's no such bit, we need to skip to next range.
*/
- if (!btrfs_folio_test_ordered(fs_info, page_folio(page),
- file_offset, len))
+ if (!btrfs_folio_test_ordered(fs_info, folio, file_offset, len))
return false;
- btrfs_folio_clear_ordered(fs_info, page_folio(page), file_offset, len);
+ btrfs_folio_clear_ordered(fs_info, folio, file_offset, len);
}
/* Now we're fine to update the accounting. */
@@ -408,7 +407,8 @@ void btrfs_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
trace_btrfs_finish_ordered_extent(inode, file_offset, len, uptodate);
spin_lock_irqsave(&inode->ordered_tree_lock, flags);
- ret = can_finish_ordered_extent(ordered, page, file_offset, len, uptodate);
+ ret = can_finish_ordered_extent(ordered, page_folio(page), file_offset,
+ len, uptodate);
spin_unlock_irqrestore(&inode->ordered_tree_lock, flags);
/*
@@ -524,7 +524,8 @@ void btrfs_mark_ordered_io_finished(struct btrfs_inode *inode,
ASSERT(end + 1 - cur < U32_MAX);
len = end + 1 - cur;
- if (can_finish_ordered_extent(entry, page, cur, len, uptodate)) {
+ if (can_finish_ordered_extent(entry, page_folio(page), cur, len,
+ uptodate)) {
spin_unlock_irqrestore(&inode->ordered_tree_lock, flags);
btrfs_queue_ordered_fn(entry);
spin_lock_irqsave(&inode->ordered_tree_lock, flags);
--
2.43.0
* [PATCH 14/46] btrfs: convert btrfs_finish_ordered_extent to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (12 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 13/46] btrfs: convert can_finish_ordered_extent to use a folio Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 15/46] btrfs: convert btrfs_mark_ordered_io_finished " Josef Bacik
` (33 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
The callers and callees of this now all use folios; update it to take a
folio as well.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 4 ++--
fs/btrfs/ordered-data.c | 6 +++---
fs/btrfs/ordered-data.h | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index da60ec1e866a..58ff09368eb9 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -472,8 +472,8 @@ static void end_bbio_data_write(struct btrfs_bio *bbio)
"incomplete page write with offset %zu and length %zu",
fi.offset, fi.length);
- btrfs_finish_ordered_extent(bbio->ordered,
- folio_page(folio, 0), start, len, !error);
+ btrfs_finish_ordered_extent(bbio->ordered, folio, start, len,
+ !error);
if (error)
mapping_set_error(folio->mapping, error);
btrfs_folio_clear_writeback(fs_info, folio, start, len);
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 760a37512c7e..e97747956040 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -397,7 +397,7 @@ static void btrfs_queue_ordered_fn(struct btrfs_ordered_extent *ordered)
}
void btrfs_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
- struct page *page, u64 file_offset, u64 len,
+ struct folio *folio, u64 file_offset, u64 len,
bool uptodate)
{
struct btrfs_inode *inode = ordered->inode;
@@ -407,8 +407,8 @@ void btrfs_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
trace_btrfs_finish_ordered_extent(inode, file_offset, len, uptodate);
spin_lock_irqsave(&inode->ordered_tree_lock, flags);
- ret = can_finish_ordered_extent(ordered, page_folio(page), file_offset,
- len, uptodate);
+ ret = can_finish_ordered_extent(ordered, folio, file_offset, len,
+ uptodate);
spin_unlock_irqrestore(&inode->ordered_tree_lock, flags);
/*
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index 51b9e81726e2..90c1c3c51ae5 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -163,7 +163,7 @@ void btrfs_put_ordered_extent(struct btrfs_ordered_extent *entry);
void btrfs_remove_ordered_extent(struct btrfs_inode *btrfs_inode,
struct btrfs_ordered_extent *entry);
void btrfs_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
- struct page *page, u64 file_offset, u64 len,
+ struct folio *folio, u64 file_offset, u64 len,
bool uptodate);
void btrfs_mark_ordered_io_finished(struct btrfs_inode *inode,
struct page *page, u64 file_offset,
--
2.43.0
* [PATCH 15/46] btrfs: convert btrfs_mark_ordered_io_finished to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (13 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 14/46] btrfs: convert btrfs_finish_ordered_extent to take " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 16/46] btrfs: convert writepage_delalloc " Josef Bacik
` (32 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We only need a folio now, so make it take a folio as an argument and
update all of the callers.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 8 ++++----
fs/btrfs/inode.c | 7 ++++---
fs/btrfs/ordered-data.c | 9 ++++-----
fs/btrfs/ordered-data.h | 4 ++--
4 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 58ff09368eb9..56bf87ac5f6c 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1428,8 +1428,8 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
u32 iosize;
if (cur >= i_size) {
- btrfs_mark_ordered_io_finished(inode, &folio->page, cur,
- len, true);
+ btrfs_mark_ordered_io_finished(inode, folio, cur, len,
+ true);
/*
* This range is beyond i_size, thus we don't need to
* bother writing back.
@@ -1569,7 +1569,7 @@ static int __extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ct
folio_end_writeback(folio);
}
if (ret) {
- btrfs_mark_ordered_io_finished(BTRFS_I(inode), &folio->page,
+ btrfs_mark_ordered_io_finished(BTRFS_I(inode), folio,
page_start, PAGE_SIZE, !ret);
mapping_set_error(folio->mapping, ret);
}
@@ -2331,7 +2331,7 @@ void extent_write_locked_range(struct inode *inode, const struct page *locked_pa
btrfs_folio_clear_writeback(fs_info, folio, cur, cur_len);
}
if (ret) {
- btrfs_mark_ordered_io_finished(BTRFS_I(inode), &folio->page,
+ btrfs_mark_ordered_io_finished(BTRFS_I(inode), folio,
cur, cur_len, !ret);
mapping_set_error(mapping, ret);
}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 26dc2c3ac903..a8744d2c6a97 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1144,7 +1144,8 @@ static void submit_uncompressed_range(struct btrfs_inode *inode,
set_page_writeback(locked_page);
end_page_writeback(locked_page);
- btrfs_mark_ordered_io_finished(inode, locked_page,
+ btrfs_mark_ordered_io_finished(inode,
+ page_folio(locked_page),
page_start, PAGE_SIZE,
!ret);
mapping_set_error(locked_page->mapping, ret);
@@ -2802,8 +2803,8 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
* to reflect the errors and clean the page.
*/
mapping_set_error(page->mapping, ret);
- btrfs_mark_ordered_io_finished(inode, page, page_start,
- PAGE_SIZE, !ret);
+ btrfs_mark_ordered_io_finished(inode, page_folio(page),
+ page_start, PAGE_SIZE, !ret);
clear_page_dirty_for_io(page);
}
btrfs_folio_clear_checked(fs_info, page_folio(page), page_start, PAGE_SIZE);
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index e97747956040..eb9b32ffbc0c 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -449,8 +449,8 @@ void btrfs_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
/*
* Mark all ordered extents io inside the specified range finished.
*
- * @page: The involved page for the operation.
- * For uncompressed buffered IO, the page status also needs to be
+ * @folio: The involved folio for the operation.
+ * For uncompressed buffered IO, the folio status also needs to be
* updated to indicate whether the pending ordered io is finished.
* Can be NULL for direct IO and compressed write.
* For these cases, callers are ensured they won't execute the
@@ -460,7 +460,7 @@ void btrfs_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
* extent(s) covering it.
*/
void btrfs_mark_ordered_io_finished(struct btrfs_inode *inode,
- struct page *page, u64 file_offset,
+ struct folio *folio, u64 file_offset,
u64 num_bytes, bool uptodate)
{
struct rb_node *node;
@@ -524,8 +524,7 @@ void btrfs_mark_ordered_io_finished(struct btrfs_inode *inode,
ASSERT(end + 1 - cur < U32_MAX);
len = end + 1 - cur;
- if (can_finish_ordered_extent(entry, page_folio(page), cur, len,
- uptodate)) {
+ if (can_finish_ordered_extent(entry, folio, cur, len, uptodate)) {
spin_unlock_irqrestore(&inode->ordered_tree_lock, flags);
btrfs_queue_ordered_fn(entry);
spin_lock_irqsave(&inode->ordered_tree_lock, flags);
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index 90c1c3c51ae5..4e152736d06c 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -166,8 +166,8 @@ void btrfs_finish_ordered_extent(struct btrfs_ordered_extent *ordered,
struct folio *folio, u64 file_offset, u64 len,
bool uptodate);
void btrfs_mark_ordered_io_finished(struct btrfs_inode *inode,
- struct page *page, u64 file_offset,
- u64 num_bytes, bool uptodate);
+ struct folio *folio, u64 file_offset,
+ u64 num_bytes, bool uptodate);
bool btrfs_dec_test_ordered_pending(struct btrfs_inode *inode,
struct btrfs_ordered_extent **cached,
u64 file_offset, u64 io_size);
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 16/46] btrfs: convert writepage_delalloc to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (14 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 15/46] btrfs: convert btrfs_mark_ordered_io_finished " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 17/46] btrfs: convert find_lock_delalloc_range to use " Josef Bacik
` (31 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We already use a folio heavily in this function. Pass the folio in
directly and use it everywhere, only passing the page down to functions
that do not take a folio yet.
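For illustration only (not part of the patch): the page based range math
maps onto the folio helpers roughly as below. folio_byte_range() is a
made-up name for this sketch; the point is that folio_pos() and
folio_size() replace page_offset() and PAGE_SIZE, which also keeps the
arithmetic correct once larger folios show up.

	/* Sketch: byte range covered by a folio, mirroring page_start/page_end. */
	static inline void folio_byte_range(struct folio *folio, u64 *start, u64 *end)
	{
		*start = folio_pos(folio);		/* file offset of the first byte */
		*end = *start + folio_size(folio) - 1;	/* inclusive end of the folio */
	}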
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 38 ++++++++++++++++++++------------------
1 file changed, 20 insertions(+), 18 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 56bf87ac5f6c..382558fe1032 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1188,13 +1188,13 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
* This returns < 0 if there were errors (page still locked)
*/
static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
- struct page *page, struct writeback_control *wbc)
+ struct folio *folio,
+ struct writeback_control *wbc)
{
struct btrfs_fs_info *fs_info = inode_to_fs_info(&inode->vfs_inode);
- struct folio *folio = page_folio(page);
- const bool is_subpage = btrfs_is_subpage(fs_info, page->mapping);
- const u64 page_start = page_offset(page);
- const u64 page_end = page_start + PAGE_SIZE - 1;
+ const bool is_subpage = btrfs_is_subpage(fs_info, folio->mapping);
+ const u64 page_start = folio_pos(folio);
+ const u64 page_end = page_start + folio_size(folio) - 1;
/*
* Save the last found delalloc end. As the delalloc end can go beyond
* page boundary, thus we cannot rely on subpage bitmap to locate the
@@ -1206,10 +1206,10 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
u64 delalloc_to_write = 0;
int ret = 0;
- /* Lock all (subpage) delalloc ranges inside the page first. */
+ /* Lock all (subpage) delalloc ranges inside the folio first. */
while (delalloc_start < page_end) {
delalloc_end = page_end;
- if (!find_lock_delalloc_range(&inode->vfs_inode, page,
+ if (!find_lock_delalloc_range(&inode->vfs_inode, &folio->page,
&delalloc_start, &delalloc_end)) {
delalloc_start = delalloc_end + 1;
continue;
@@ -1234,7 +1234,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
if (!is_subpage) {
/*
* For non-subpage case, the found delalloc range must
- * cover this page and there must be only one locked
+ * cover this folio and there must be only one locked
* delalloc range.
*/
found_start = page_start;
@@ -1248,7 +1248,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
break;
/*
* The subpage range covers the last sector, the delalloc range may
- * end beyond the page boundary, use the saved delalloc_end
+ * end beyond the folio boundary, use the saved delalloc_end
* instead.
*/
if (found_start + found_len >= page_end)
@@ -1256,7 +1256,8 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
if (ret >= 0) {
/* No errors hit so far, run the current delalloc range. */
- ret = btrfs_run_delalloc_range(inode, page, found_start,
+ ret = btrfs_run_delalloc_range(inode, &folio->page,
+ found_start,
found_start + found_len - 1,
wbc);
} else {
@@ -1266,15 +1267,16 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
*/
unlock_extent(&inode->io_tree, found_start,
found_start + found_len - 1, NULL);
- __unlock_for_delalloc(&inode->vfs_inode, page, found_start,
+ __unlock_for_delalloc(&inode->vfs_inode, &folio->page,
+ found_start,
found_start + found_len - 1);
}
/*
* We can hit btrfs_run_delalloc_range() with >0 return value.
*
- * This happens when either the IO is already done and page
- * unlocked (inline) or the IO submission and page unlock would
+ * This happens when either the IO is already done and folio
+ * unlocked (inline) or the IO submission and folio unlock would
* be handled as async (compression).
*
* Inline is only possible for regular sectorsize for now.
@@ -1282,14 +1284,14 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
* Compression is possible for both subpage and regular cases,
* but even for subpage compression only happens for page aligned
* range, thus the found delalloc range must go beyond current
- * page.
+ * folio.
*/
if (ret > 0)
ASSERT(!is_subpage || found_start + found_len >= page_end);
/*
- * Above btrfs_run_delalloc_range() may have unlocked the page,
- * thus for the last range, we cannot touch the page anymore.
+ * Above btrfs_run_delalloc_range() may have unlocked the folio,
+ * thus for the last range, we cannot touch the folio anymore.
*/
if (found_start + found_len >= last_delalloc_end + 1)
break;
@@ -1312,7 +1314,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
/*
* If btrfs_run_dealloc_range() already started I/O and unlocked
- * the pages, we just need to account for them here.
+ * the folios, we just need to account for them here.
*/
if (ret == 1) {
wbc->nr_to_write -= delalloc_to_write;
@@ -1549,7 +1551,7 @@ static int __extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ct
if (ret < 0)
goto done;
- ret = writepage_delalloc(BTRFS_I(inode), &folio->page, bio_ctrl->wbc);
+ ret = writepage_delalloc(BTRFS_I(inode), folio, bio_ctrl->wbc);
if (ret == 1)
return 0;
if (ret)
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 17/46] btrfs: convert find_lock_delalloc_range to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (15 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 16/46] btrfs: convert writepage_delalloc " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 18/46] btrfs: convert lock_delalloc_pages to take " Josef Bacik
` (30 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Instead of passing in a page for locked_page, pass in the folio. We
only use the folio itself to validate some range assumptions, and then
pass it down to other functions.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 28 ++++++++++++++--------------
fs/btrfs/extent_io.h | 2 +-
fs/btrfs/tests/extent-io-tests.c | 10 +++++-----
3 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 382558fe1032..def12bb8b34d 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -304,8 +304,8 @@ static noinline int lock_delalloc_pages(struct inode *inode,
*/
EXPORT_FOR_TESTS
noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
- struct page *locked_page, u64 *start,
- u64 *end)
+ struct folio *locked_folio,
+ u64 *start, u64 *end)
{
struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
@@ -323,9 +323,9 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
/* Caller should pass a valid @end to indicate the search range end */
ASSERT(orig_end > orig_start);
- /* The range should at least cover part of the page */
- ASSERT(!(orig_start >= page_offset(locked_page) + PAGE_SIZE ||
- orig_end <= page_offset(locked_page)));
+ /* The range should at least cover part of the folio */
+ ASSERT(!(orig_start >= folio_pos(locked_folio) + folio_size(locked_folio) ||
+ orig_end <= folio_pos(locked_folio)));
again:
/* step one, find a bunch of delalloc bytes starting at start */
delalloc_start = *start;
@@ -342,25 +342,25 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
}
/*
- * start comes from the offset of locked_page. We have to lock
- * pages in order, so we can't process delalloc bytes before
- * locked_page
+ * start comes from the offset of locked_folio. We have to lock
+ * folios in order, so we can't process delalloc bytes before
+ * locked_folio
*/
if (delalloc_start < *start)
delalloc_start = *start;
/*
- * make sure to limit the number of pages we try to lock down
+ * make sure to limit the number of folios we try to lock down
*/
if (delalloc_end + 1 - delalloc_start > max_bytes)
delalloc_end = delalloc_start + max_bytes - 1;
- /* step two, lock all the pages after the page that has start */
- ret = lock_delalloc_pages(inode, locked_page,
+ /* step two, lock all the folios after the folio that has start */
+ ret = lock_delalloc_pages(inode, &locked_folio->page,
delalloc_start, delalloc_end);
ASSERT(!ret || ret == -EAGAIN);
if (ret == -EAGAIN) {
- /* some of the pages are gone, lets avoid looping by
+ /* some of the folios are gone, lets avoid looping by
* shortening the size of the delalloc range we're searching
*/
free_extent_state(cached_state);
@@ -384,7 +384,7 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
unlock_extent(tree, delalloc_start, delalloc_end, &cached_state);
if (!ret) {
- __unlock_for_delalloc(inode, locked_page,
+ __unlock_for_delalloc(inode, &locked_folio->page,
delalloc_start, delalloc_end);
cond_resched();
goto again;
@@ -1209,7 +1209,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
/* Lock all (subpage) delalloc ranges inside the folio first. */
while (delalloc_start < page_end) {
delalloc_end = page_end;
- if (!find_lock_delalloc_range(&inode->vfs_inode, &folio->page,
+ if (!find_lock_delalloc_range(&inode->vfs_inode, folio,
&delalloc_start, &delalloc_end)) {
delalloc_start = delalloc_end + 1;
continue;
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index dceebd76c7d1..1dd295e1b5a5 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -368,7 +368,7 @@ int btrfs_alloc_folio_array(unsigned int nr_folios, struct folio **folio_array);
#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
bool find_lock_delalloc_range(struct inode *inode,
- struct page *locked_page, u64 *start,
+ struct folio *locked_folio, u64 *start,
u64 *end);
#endif
struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/tests/extent-io-tests.c b/fs/btrfs/tests/extent-io-tests.c
index 865d4af4b303..0a2dbfaaf49e 100644
--- a/fs/btrfs/tests/extent-io-tests.c
+++ b/fs/btrfs/tests/extent-io-tests.c
@@ -180,7 +180,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize)
set_extent_bit(tmp, 0, sectorsize - 1, EXTENT_DELALLOC, NULL);
start = 0;
end = start + PAGE_SIZE - 1;
- found = find_lock_delalloc_range(inode, locked_page, &start,
+ found = find_lock_delalloc_range(inode, page_folio(locked_page), &start,
&end);
if (!found) {
test_err("should have found at least one delalloc");
@@ -211,7 +211,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize)
set_extent_bit(tmp, sectorsize, max_bytes - 1, EXTENT_DELALLOC, NULL);
start = test_start;
end = start + PAGE_SIZE - 1;
- found = find_lock_delalloc_range(inode, locked_page, &start,
+ found = find_lock_delalloc_range(inode, page_folio(locked_page), &start,
&end);
if (!found) {
test_err("couldn't find delalloc in our range");
@@ -245,7 +245,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize)
}
start = test_start;
end = start + PAGE_SIZE - 1;
- found = find_lock_delalloc_range(inode, locked_page, &start,
+ found = find_lock_delalloc_range(inode, page_folio(locked_page), &start,
&end);
if (found) {
test_err("found range when we shouldn't have");
@@ -266,7 +266,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize)
set_extent_bit(tmp, max_bytes, total_dirty - 1, EXTENT_DELALLOC, NULL);
start = test_start;
end = start + PAGE_SIZE - 1;
- found = find_lock_delalloc_range(inode, locked_page, &start,
+ found = find_lock_delalloc_range(inode, page_folio(locked_page), &start,
&end);
if (!found) {
test_err("didn't find our range");
@@ -307,7 +307,7 @@ static int test_find_delalloc(u32 sectorsize, u32 nodesize)
* this changes at any point in the future we will need to fix this
* tests expected behavior.
*/
- found = find_lock_delalloc_range(inode, locked_page, &start,
+ found = find_lock_delalloc_range(inode, page_folio(locked_page), &start,
&end);
if (!found) {
test_err("didn't find our range");
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 18/46] btrfs: convert lock_delalloc_pages to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (16 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 17/46] btrfs: convert find_lock_delalloc_range to use " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 19/46] btrfs: convert __unlock_for_delalloc " Josef Bacik
` (29 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
While converting it to take a folio, also rename lock_delalloc_pages =>
lock_delalloc_folios, since it now works exclusively on folios.
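For context, here is a rough sketch of the folio_batch walk that
lock_delalloc_folios() is built around. handle_folio() is a made-up
stand-in for the per-folio locking work, not a real kernel function.

	static void walk_folios_in_range(struct address_space *mapping, u64 start, u64 end)
	{
		pgoff_t index = start >> PAGE_SHIFT;
		pgoff_t end_index = end >> PAGE_SHIFT;
		struct folio_batch fbatch;

		folio_batch_init(&fbatch);
		while (index <= end_index) {
			unsigned int found, i;

			/* Grab up to a batch worth of folios in the range. */
			found = filemap_get_folios_contig(mapping, &index,
							  end_index, &fbatch);
			if (found == 0)
				break;
			for (i = 0; i < found; i++)
				handle_folio(fbatch.folios[i]);
			/* Drops the references taken by the lookup. */
			folio_batch_release(&fbatch);
			cond_resched();
		}
	}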
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index def12bb8b34d..33c45b6e8969 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -230,10 +230,9 @@ static noinline void __unlock_for_delalloc(const struct inode *inode,
PAGE_UNLOCK);
}
-static noinline int lock_delalloc_pages(struct inode *inode,
- const struct page *locked_page,
- u64 start,
- u64 end)
+static noinline int lock_delalloc_folios(struct inode *inode,
+ const struct folio *locked_folio,
+ u64 start, u64 end)
{
struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
struct address_space *mapping = inode->i_mapping;
@@ -243,7 +242,7 @@ static noinline int lock_delalloc_pages(struct inode *inode,
u64 processed_end = start;
struct folio_batch fbatch;
- if (index == locked_page->index && index == end_index)
+ if (index == locked_folio->index && index == end_index)
return 0;
folio_batch_init(&fbatch);
@@ -257,23 +256,22 @@ static noinline int lock_delalloc_pages(struct inode *inode,
for (i = 0; i < found_folios; i++) {
struct folio *folio = fbatch.folios[i];
- struct page *page = folio_page(folio, 0);
u32 len = end + 1 - start;
- if (page == locked_page)
+ if (folio == locked_folio)
continue;
if (btrfs_folio_start_writer_lock(fs_info, folio, start,
len))
goto out;
- if (!PageDirty(page) || page->mapping != mapping) {
+ if (!folio_test_dirty(folio) || folio->mapping != mapping) {
btrfs_folio_end_writer_lock(fs_info, folio, start,
len);
goto out;
}
- processed_end = page_offset(page) + PAGE_SIZE - 1;
+ processed_end = folio_pos(folio) + folio_size(folio) - 1;
}
folio_batch_release(&fbatch);
cond_resched();
@@ -283,7 +281,8 @@ static noinline int lock_delalloc_pages(struct inode *inode,
out:
folio_batch_release(&fbatch);
if (processed_end > start)
- __unlock_for_delalloc(inode, locked_page, start, processed_end);
+ __unlock_for_delalloc(inode, &locked_folio->page, start,
+ processed_end);
return -EAGAIN;
}
@@ -356,8 +355,8 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
delalloc_end = delalloc_start + max_bytes - 1;
/* step two, lock all the folios after the folio that has start */
- ret = lock_delalloc_pages(inode, &locked_folio->page,
- delalloc_start, delalloc_end);
+ ret = lock_delalloc_folios(inode, locked_folio, delalloc_start,
+ delalloc_end);
ASSERT(!ret || ret == -EAGAIN);
if (ret == -EAGAIN) {
/* some of the folios are gone, lets avoid looping by
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 19/46] btrfs: convert __unlock_for_delalloc to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (17 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 18/46] btrfs: convert lock_delalloc_pages to take " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 20/46] btrfs: convert __process_pages_contig " Josef Bacik
` (28 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
All of the callers have a folio at this point, so update
__unlock_for_delalloc to take a folio and keep it consistent with its
callers.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 33c45b6e8969..46d26f54e9d4 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -216,18 +216,18 @@ static void __process_pages_contig(struct address_space *mapping,
}
static noinline void __unlock_for_delalloc(const struct inode *inode,
- const struct page *locked_page,
+ const struct folio *locked_folio,
u64 start, u64 end)
{
unsigned long index = start >> PAGE_SHIFT;
unsigned long end_index = end >> PAGE_SHIFT;
- ASSERT(locked_page);
- if (index == locked_page->index && end_index == index)
+ ASSERT(locked_folio);
+ if (index == locked_folio->index && end_index == index)
return;
- __process_pages_contig(inode->i_mapping, locked_page, start, end,
- PAGE_UNLOCK);
+ __process_pages_contig(inode->i_mapping, &locked_folio->page, start,
+ end, PAGE_UNLOCK);
}
static noinline int lock_delalloc_folios(struct inode *inode,
@@ -281,7 +281,7 @@ static noinline int lock_delalloc_folios(struct inode *inode,
out:
folio_batch_release(&fbatch);
if (processed_end > start)
- __unlock_for_delalloc(inode, &locked_folio->page, start,
+ __unlock_for_delalloc(inode, locked_folio, start,
processed_end);
return -EAGAIN;
}
@@ -383,8 +383,8 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
unlock_extent(tree, delalloc_start, delalloc_end, &cached_state);
if (!ret) {
- __unlock_for_delalloc(inode, &locked_folio->page,
- delalloc_start, delalloc_end);
+ __unlock_for_delalloc(inode, locked_folio, delalloc_start,
+ delalloc_end);
cond_resched();
goto again;
}
@@ -1266,7 +1266,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
*/
unlock_extent(&inode->io_tree, found_start,
found_start + found_len - 1, NULL);
- __unlock_for_delalloc(&inode->vfs_inode, &folio->page,
+ __unlock_for_delalloc(&inode->vfs_inode, folio,
found_start,
found_start + found_len - 1);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 20/46] btrfs: convert __process_pages_contig to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (18 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 19/46] btrfs: convert __unlock_for_delalloc " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 21/46] btrfs: convert process_one_page to operate only on folios Josef Bacik
` (27 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
This already operates mostly on folios, so update it to take a folio
for the locked folio instead of a page, and rename it from
__process_pages_contig => __process_folios_contig.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 46d26f54e9d4..d49f3adf7d86 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -187,9 +187,9 @@ static void process_one_page(struct btrfs_fs_info *fs_info,
btrfs_folio_end_writer_lock(fs_info, folio, start, len);
}
-static void __process_pages_contig(struct address_space *mapping,
- const struct page *locked_page, u64 start, u64 end,
- unsigned long page_ops)
+static void __process_folios_contig(struct address_space *mapping,
+ const struct folio *locked_folio, u64 start,
+ u64 end, unsigned long page_ops)
{
struct btrfs_fs_info *fs_info = inode_to_fs_info(mapping->host);
pgoff_t start_index = start >> PAGE_SHIFT;
@@ -207,8 +207,9 @@ static void __process_pages_contig(struct address_space *mapping,
for (i = 0; i < found_folios; i++) {
struct folio *folio = fbatch.folios[i];
- process_one_page(fs_info, &folio->page, locked_page,
- page_ops, start, end);
+ process_one_page(fs_info, &folio->page,
+ &locked_folio->page, page_ops, start,
+ end);
}
folio_batch_release(&fbatch);
cond_resched();
@@ -226,8 +227,8 @@ static noinline void __unlock_for_delalloc(const struct inode *inode,
if (index == locked_folio->index && end_index == index)
return;
- __process_pages_contig(inode->i_mapping, &locked_folio->page, start,
- end, PAGE_UNLOCK);
+ __process_folios_contig(inode->i_mapping, locked_folio, start, end,
+ PAGE_UNLOCK);
}
static noinline int lock_delalloc_folios(struct inode *inode,
@@ -401,8 +402,8 @@ void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
{
clear_extent_bit(&inode->io_tree, start, end, clear_bits, cached);
- __process_pages_contig(inode->vfs_inode.i_mapping, locked_page,
- start, end, page_ops);
+ __process_folios_contig(inode->vfs_inode.i_mapping,
+ page_folio(locked_page), start, end, page_ops);
}
static bool btrfs_verify_folio(struct folio *folio, u64 start, u32 len)
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 21/46] btrfs: convert process_one_page to operate only on folios
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (19 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 20/46] btrfs: convert __process_pages_contig " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 22/46] btrfs: convert extent_clear_unlock_delalloc to take a folio Josef Bacik
` (26 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that this mostly uses folios, update it to take folios, use the
folios that are passed in, and rename from process_one_page =>
process_one_folio.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d49f3adf7d86..b944dcd9e941 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -164,11 +164,10 @@ void __cold extent_buffer_free_cachep(void)
kmem_cache_destroy(extent_buffer_cache);
}
-static void process_one_page(struct btrfs_fs_info *fs_info,
- struct page *page, const struct page *locked_page,
- unsigned long page_ops, u64 start, u64 end)
+static void process_one_folio(struct btrfs_fs_info *fs_info,
+ struct folio *folio, const struct folio *locked_folio,
+ unsigned long page_ops, u64 start, u64 end)
{
- struct folio *folio = page_folio(page);
u32 len;
ASSERT(end + 1 - start != 0 && end + 1 - start < U32_MAX);
@@ -183,7 +182,7 @@ static void process_one_page(struct btrfs_fs_info *fs_info,
if (page_ops & PAGE_END_WRITEBACK)
btrfs_folio_clamp_clear_writeback(fs_info, folio, start, len);
- if (page != locked_page && (page_ops & PAGE_UNLOCK))
+ if (folio != locked_folio && (page_ops & PAGE_UNLOCK))
btrfs_folio_end_writer_lock(fs_info, folio, start, len);
}
@@ -207,9 +206,8 @@ static void __process_folios_contig(struct address_space *mapping,
for (i = 0; i < found_folios; i++) {
struct folio *folio = fbatch.folios[i];
- process_one_page(fs_info, &folio->page,
- &locked_folio->page, page_ops, start,
- end);
+ process_one_folio(fs_info, folio, locked_folio,
+ page_ops, start, end);
}
folio_batch_release(&fbatch);
cond_resched();
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 22/46] btrfs: convert extent_clear_unlock_delalloc to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (20 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 21/46] btrfs: convert process_one_page to operate only on folios Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 23/46] btrfs: convert extent_write_locked_range " Josef Bacik
` (25 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Instead of taking the locked page, take the locked folio so we can pass
that into __process_folios_contig.
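As a small illustration of the calling convention (argument names taken
from the surrounding diff): a caller that still holds a struct page
bridges to the new argument with page_folio(), while converted callers
pass the folio straight through.

	/* Not yet converted caller: wrap the page in its folio. */
	extent_clear_unlock_delalloc(inode, start, end, page_folio(locked_page),
				     &cached, clear_bits, page_ops);

	/* Fully converted caller: pass the folio directly. */
	extent_clear_unlock_delalloc(inode, start, end, locked_folio,
				     &cached, clear_bits, page_ops);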
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 6 +++---
fs/btrfs/extent_io.h | 2 +-
fs/btrfs/inode.c | 25 ++++++++++++++-----------
3 files changed, 18 insertions(+), 15 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index b944dcd9e941..6036fd6b9b79 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -394,14 +394,14 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
}
void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
- const struct page *locked_page,
+ const struct folio *locked_folio,
struct extent_state **cached,
u32 clear_bits, unsigned long page_ops)
{
clear_extent_bit(&inode->io_tree, start, end, clear_bits, cached);
- __process_folios_contig(inode->vfs_inode.i_mapping,
- page_folio(locked_page), start, end, page_ops);
+ __process_folios_contig(inode->vfs_inode.i_mapping, locked_folio, start,
+ end, page_ops);
}
static bool btrfs_verify_folio(struct folio *folio, u64 start, u32 len)
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 1dd295e1b5a5..5d36031578ff 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -354,7 +354,7 @@ void set_extent_buffer_dirty(struct extent_buffer *eb);
void set_extent_buffer_uptodate(struct extent_buffer *eb);
void clear_extent_buffer_uptodate(struct extent_buffer *eb);
void extent_clear_unlock_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
- const struct page *locked_page,
+ const struct folio *locked_folio,
struct extent_state **cached,
u32 bits_to_clear, unsigned long page_ops);
int extent_invalidate_folio(struct extent_io_tree *tree,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index a8744d2c6a97..199f783680e2 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -743,10 +743,10 @@ static noinline int cow_file_range_inline(struct btrfs_inode *inode,
if (ret == 0)
locked_page = NULL;
- extent_clear_unlock_delalloc(inode, offset, end, locked_page, &cached,
- clear_flags,
- PAGE_UNLOCK | PAGE_START_WRITEBACK |
- PAGE_END_WRITEBACK);
+ extent_clear_unlock_delalloc(inode, offset, end,
+ page_folio(locked_page), &cached,
+ clear_flags, PAGE_UNLOCK |
+ PAGE_START_WRITEBACK | PAGE_END_WRITEBACK);
return ret;
}
@@ -1501,7 +1501,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
page_ops |= PAGE_SET_ORDERED;
extent_clear_unlock_delalloc(inode, start, start + ram_size - 1,
- locked_page, &cached,
+ page_folio(locked_page), &cached,
EXTENT_LOCKED | EXTENT_DELALLOC,
page_ops);
if (num_bytes < cur_alloc_size)
@@ -1560,7 +1560,8 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
if (!locked_page)
mapping_set_error(inode->vfs_inode.i_mapping, ret);
extent_clear_unlock_delalloc(inode, orig_start, start - 1,
- locked_page, NULL, 0, page_ops);
+ page_folio(locked_page), NULL, 0,
+ page_ops);
}
/*
@@ -1583,7 +1584,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
if (extent_reserved) {
extent_clear_unlock_delalloc(inode, start,
start + cur_alloc_size - 1,
- locked_page, &cached,
+ page_folio(locked_page), &cached,
clear_bits,
page_ops);
btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL);
@@ -1598,8 +1599,9 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
*/
if (start < end) {
clear_bits |= EXTENT_CLEAR_DATA_RESV;
- extent_clear_unlock_delalloc(inode, start, end, locked_page,
- &cached, clear_bits, page_ops);
+ extent_clear_unlock_delalloc(inode, start, end,
+ page_folio(locked_page), &cached,
+ clear_bits, page_ops);
btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL);
}
return ret;
@@ -2207,7 +2209,8 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
btrfs_put_ordered_extent(ordered);
extent_clear_unlock_delalloc(inode, cur_offset, nocow_end,
- locked_page, &cached_state,
+ page_folio(locked_page),
+ &cached_state,
EXTENT_LOCKED | EXTENT_DELALLOC |
EXTENT_CLEAR_DATA_RESV,
PAGE_UNLOCK | PAGE_SET_ORDERED);
@@ -2256,7 +2259,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
lock_extent(&inode->io_tree, cur_offset, end, &cached);
extent_clear_unlock_delalloc(inode, cur_offset, end,
- locked_page, &cached,
+ page_folio(locked_page), &cached,
EXTENT_LOCKED | EXTENT_DELALLOC |
EXTENT_DEFRAG |
EXTENT_DO_ACCOUNTING, PAGE_UNLOCK |
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 23/46] btrfs: convert extent_write_locked_range to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (21 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 22/46] btrfs: convert extent_clear_unlock_delalloc to take a folio Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 24/46] btrfs: convert run_delalloc_cow " Josef Bacik
` (24 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
This mostly uses folios already, so convert it to take a folio and
update the callers to pass in the folio.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 4 ++--
fs/btrfs/extent_io.h | 2 +-
fs/btrfs/inode.c | 3 ++-
3 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 6036fd6b9b79..1faadf903e00 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2275,7 +2275,7 @@ static int extent_write_cache_pages(struct address_space *mapping,
* already been ran (aka, ordered extent inserted) and all pages are still
* locked.
*/
-void extent_write_locked_range(struct inode *inode, const struct page *locked_page,
+void extent_write_locked_range(struct inode *inode, const struct folio *locked_folio,
u64 start, u64 end, struct writeback_control *wbc,
bool pages_dirty)
{
@@ -2317,7 +2317,7 @@ void extent_write_locked_range(struct inode *inode, const struct page *locked_pa
}
ASSERT(folio_test_locked(folio));
- if (pages_dirty && &folio->page != locked_page)
+ if (pages_dirty && folio != locked_folio)
ASSERT(folio_test_dirty(folio));
ret = __extent_writepage_io(BTRFS_I(inode), folio, cur, cur_len,
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 5d36031578ff..b38460279b99 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -240,7 +240,7 @@ bool try_release_extent_mapping(struct page *page, gfp_t mask);
int try_release_extent_buffer(struct page *page);
int btrfs_read_folio(struct file *file, struct folio *folio);
-void extent_write_locked_range(struct inode *inode, const struct page *locked_page,
+void extent_write_locked_range(struct inode *inode, const struct folio *locked_folio,
u64 start, u64 end, struct writeback_control *wbc,
bool pages_dirty);
int btrfs_writepages(struct address_space *mapping, struct writeback_control *wbc);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 199f783680e2..0b44a250e5b8 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1758,7 +1758,8 @@ static noinline int run_delalloc_cow(struct btrfs_inode *inode,
true, false);
if (ret)
return ret;
- extent_write_locked_range(&inode->vfs_inode, locked_page, start,
+ extent_write_locked_range(&inode->vfs_inode,
+ page_folio(locked_page), start,
done_offset, wbc, pages_dirty);
start = done_offset + 1;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 24/46] btrfs: convert run_delalloc_cow to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (22 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 23/46] btrfs: convert extent_write_locked_range " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 25/46] btrfs: convert cow_file_range_inline " Josef Bacik
` (23 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We already pass a folio into extent_write_locked_range, so take a folio
here to pass along and update the callers to pass in a folio.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0b44a250e5b8..db0aa7ece99c 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -116,7 +116,7 @@ static int btrfs_setsize(struct inode *inode, struct iattr *attr);
static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback);
static noinline int run_delalloc_cow(struct btrfs_inode *inode,
- struct page *locked_page, u64 start,
+ struct folio *locked_folio, u64 start,
u64 end, struct writeback_control *wbc,
bool pages_dirty);
@@ -1135,7 +1135,8 @@ static void submit_uncompressed_range(struct btrfs_inode *inode,
};
wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode);
- ret = run_delalloc_cow(inode, locked_page, start, end, &wbc, false);
+ ret = run_delalloc_cow(inode, page_folio(locked_page), start, end,
+ &wbc, false);
wbc_detach_inode(&wbc);
if (ret < 0) {
btrfs_cleanup_ordered_extents(inode, locked_page, start, end - start + 1);
@@ -1746,7 +1747,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
* covered by the range.
*/
static noinline int run_delalloc_cow(struct btrfs_inode *inode,
- struct page *locked_page, u64 start,
+ struct folio *locked_folio, u64 start,
u64 end, struct writeback_control *wbc,
bool pages_dirty)
{
@@ -1754,13 +1755,12 @@ static noinline int run_delalloc_cow(struct btrfs_inode *inode,
int ret;
while (start <= end) {
- ret = cow_file_range(inode, locked_page, start, end, &done_offset,
- true, false);
+ ret = cow_file_range(inode, &locked_folio->page, start, end,
+ &done_offset, true, false);
if (ret)
return ret;
- extent_write_locked_range(&inode->vfs_inode,
- page_folio(locked_page), start,
- done_offset, wbc, pages_dirty);
+ extent_write_locked_range(&inode->vfs_inode, locked_folio,
+ start, done_offset, wbc, pages_dirty);
start = done_offset + 1;
}
@@ -2311,8 +2311,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
return 1;
if (zoned)
- ret = run_delalloc_cow(inode, locked_page, start, end, wbc,
- true);
+ ret = run_delalloc_cow(inode, page_folio(locked_page), start,
+ end, wbc, true);
else
ret = cow_file_range(inode, locked_page, start, end, NULL,
false, false);
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 25/46] btrfs: convert cow_file_range_inline to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (23 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 24/46] btrfs: convert run_delalloc_cow " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 26/46] btrfs: convert cow_file_range " Josef Bacik
` (22 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that we want the folio in this function, convert it to take a folio
directly and use that.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index db0aa7ece99c..7f2875c99883 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -715,7 +715,7 @@ static noinline int __cow_file_range_inline(struct btrfs_inode *inode, u64 offse
}
static noinline int cow_file_range_inline(struct btrfs_inode *inode,
- struct page *locked_page,
+ struct folio *locked_folio,
u64 offset, u64 end,
size_t compressed_size,
int compress_type,
@@ -741,10 +741,9 @@ static noinline int cow_file_range_inline(struct btrfs_inode *inode,
}
if (ret == 0)
- locked_page = NULL;
+ locked_folio = NULL;
- extent_clear_unlock_delalloc(inode, offset, end,
- page_folio(locked_page), &cached,
+ extent_clear_unlock_delalloc(inode, offset, end, locked_folio, &cached,
clear_flags, PAGE_UNLOCK |
PAGE_START_WRITEBACK | PAGE_END_WRITEBACK);
return ret;
@@ -1365,8 +1364,9 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
if (!no_inline) {
/* lets try to make an inline extent */
- ret = cow_file_range_inline(inode, locked_page, start, end, 0,
- BTRFS_COMPRESS_NONE, NULL, false);
+ ret = cow_file_range_inline(inode, page_folio(locked_page),
+ start, end, 0, BTRFS_COMPRESS_NONE,
+ NULL, false);
if (ret <= 0) {
/*
* We succeeded, return 1 so the caller knows we're done
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 26/46] btrfs: convert cow_file_range to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (24 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 25/46] btrfs: convert cow_file_range_inline " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 27/46] btrfs: convert fallback_to_cow " Josef Bacik
` (21 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Convert this to take a folio and pass it into all of the various cleanup
functions. Update the callers to pass in a folio instead.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 45 +++++++++++++++++++++------------------------
1 file changed, 21 insertions(+), 24 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 7f2875c99883..9fc15b881dba 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1307,21 +1307,21 @@ u64 btrfs_get_extent_allocation_hint(struct btrfs_inode *inode, u64 start,
* allocate extents on disk for the range, and create ordered data structs
* in ram to track those extents.
*
- * locked_page is the page that writepage had locked already. We use
+ * locked_folio is the folio that writepage had locked already. We use
* it to make sure we don't do extra locks or unlocks.
*
- * When this function fails, it unlocks all pages except @locked_page.
+ * When this function fails, it unlocks all pages except @locked_folio.
*
* When this function successfully creates an inline extent, it returns 1 and
- * unlocks all pages including locked_page and starts I/O on them.
- * (In reality inline extents are limited to a single page, so locked_page is
+ * unlocks all pages including locked_folio and starts I/O on them.
+ * (In reality inline extents are limited to a single page, so locked_folio is
* the only page handled anyway).
*
* When this function succeed and creates a normal extent, the page locking
* status depends on the passed in flags:
*
* - If @keep_locked is set, all pages are kept locked.
- * - Else all pages except for @locked_page are unlocked.
+ * - Else all pages except for @locked_folio are unlocked.
*
* When a failure happens in the second or later iteration of the
* while-loop, the ordered extents created in previous iterations are kept
@@ -1330,8 +1330,8 @@ u64 btrfs_get_extent_allocation_hint(struct btrfs_inode *inode, u64 start,
* example.
*/
static noinline int cow_file_range(struct btrfs_inode *inode,
- struct page *locked_page, u64 start, u64 end,
- u64 *done_offset,
+ struct folio *locked_folio, u64 start,
+ u64 end, u64 *done_offset,
bool keep_locked, bool no_inline)
{
struct btrfs_root *root = inode->root;
@@ -1364,9 +1364,8 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
if (!no_inline) {
/* lets try to make an inline extent */
- ret = cow_file_range_inline(inode, page_folio(locked_page),
- start, end, 0, BTRFS_COMPRESS_NONE,
- NULL, false);
+ ret = cow_file_range_inline(inode, locked_folio, start, end, 0,
+ BTRFS_COMPRESS_NONE, NULL, false);
if (ret <= 0) {
/*
* We succeeded, return 1 so the caller knows we're done
@@ -1502,7 +1501,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
page_ops |= PAGE_SET_ORDERED;
extent_clear_unlock_delalloc(inode, start, start + ram_size - 1,
- page_folio(locked_page), &cached,
+ locked_folio, &cached,
EXTENT_LOCKED | EXTENT_DELALLOC,
page_ops);
if (num_bytes < cur_alloc_size)
@@ -1555,14 +1554,13 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
* function.
*
* However, in case of @keep_locked, we still need to unlock the pages
- * (except @locked_page) to ensure all the pages are unlocked.
+ * (except @locked_folio) to ensure all the pages are unlocked.
*/
if (keep_locked && orig_start < start) {
- if (!locked_page)
+ if (!locked_folio)
mapping_set_error(inode->vfs_inode.i_mapping, ret);
extent_clear_unlock_delalloc(inode, orig_start, start - 1,
- page_folio(locked_page), NULL, 0,
- page_ops);
+ locked_folio, NULL, 0, page_ops);
}
/*
@@ -1585,8 +1583,7 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
if (extent_reserved) {
extent_clear_unlock_delalloc(inode, start,
start + cur_alloc_size - 1,
- page_folio(locked_page), &cached,
- clear_bits,
+ locked_folio, &cached, clear_bits,
page_ops);
btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL);
start += cur_alloc_size;
@@ -1600,9 +1597,8 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
*/
if (start < end) {
clear_bits |= EXTENT_CLEAR_DATA_RESV;
- extent_clear_unlock_delalloc(inode, start, end,
- page_folio(locked_page), &cached,
- clear_bits, page_ops);
+ extent_clear_unlock_delalloc(inode, start, end, locked_folio,
+ &cached, clear_bits, page_ops);
btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL);
}
return ret;
@@ -1755,7 +1751,7 @@ static noinline int run_delalloc_cow(struct btrfs_inode *inode,
int ret;
while (start <= end) {
- ret = cow_file_range(inode, &locked_folio->page, start, end,
+ ret = cow_file_range(inode, locked_folio, start, end,
&done_offset, true, false);
if (ret)
return ret;
@@ -1837,7 +1833,8 @@ static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page,
* is written out and unlocked directly and a normal NOCOW extent
* doesn't work.
*/
- ret = cow_file_range(inode, locked_page, start, end, NULL, false, true);
+ ret = cow_file_range(inode, page_folio(locked_page), start, end, NULL,
+ false, true);
ASSERT(ret != 1);
return ret;
}
@@ -2314,8 +2311,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
ret = run_delalloc_cow(inode, page_folio(locked_page), start,
end, wbc, true);
else
- ret = cow_file_range(inode, locked_page, start, end, NULL,
- false, false);
+ ret = cow_file_range(inode, page_folio(locked_page), start, end,
+ NULL, false, false);
out:
if (ret < 0)
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 27/46] btrfs: convert fallback_to_cow to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (25 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 26/46] btrfs: convert cow_file_range " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 28/46] btrfs: convert run_delalloc_nocow " Josef Bacik
` (20 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
With this we can pass the folio directly into cow_file_range().
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9fc15b881dba..d8ff1bb188e1 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1763,8 +1763,9 @@ static noinline int run_delalloc_cow(struct btrfs_inode *inode,
return 1;
}
-static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page,
- const u64 start, const u64 end)
+static int fallback_to_cow(struct btrfs_inode *inode,
+ struct folio *locked_folio, const u64 start,
+ const u64 end)
{
const bool is_space_ino = btrfs_is_free_space_inode(inode);
const bool is_reloc_ino = btrfs_is_data_reloc_root(inode->root);
@@ -1833,8 +1834,8 @@ static int fallback_to_cow(struct btrfs_inode *inode, struct page *locked_page,
* is written out and unlocked directly and a normal NOCOW extent
* doesn't work.
*/
- ret = cow_file_range(inode, page_folio(locked_page), start, end, NULL,
- false, true);
+ ret = cow_file_range(inode, locked_folio, start, end, NULL, false,
+ true);
ASSERT(ret != 1);
return ret;
}
@@ -2151,7 +2152,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
* NOCOW, following one which needs to be COW'ed
*/
if (cow_start != (u64)-1) {
- ret = fallback_to_cow(inode, locked_page,
+ ret = fallback_to_cow(inode, page_folio(locked_page),
cow_start, found_key.offset - 1);
cow_start = (u64)-1;
if (ret) {
@@ -2230,7 +2231,8 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
if (cow_start != (u64)-1) {
cur_offset = end;
- ret = fallback_to_cow(inode, locked_page, cow_start, end);
+ ret = fallback_to_cow(inode, page_folio(locked_page), cow_start,
+ end);
cow_start = (u64)-1;
if (ret)
goto error;
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 28/46] btrfs: convert run_delalloc_nocow to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (26 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 27/46] btrfs: convert fallback_to_cow " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 29/46] btrfs: convert btrfs_cleanup_ordered_extents to use folios Josef Bacik
` (19 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that all of the functions that use locked_page in run_delalloc_nocow
take a folio, update it to take a folio as well and update the caller.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d8ff1bb188e1..a95bbe602a90 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1989,7 +1989,7 @@ static int can_nocow_file_extent(struct btrfs_path *path,
* blocks on disk
*/
static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
- struct page *locked_page,
+ struct folio *locked_folio,
const u64 start, const u64 end)
{
struct btrfs_fs_info *fs_info = inode->root->fs_info;
@@ -2152,8 +2152,8 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
* NOCOW, following one which needs to be COW'ed
*/
if (cow_start != (u64)-1) {
- ret = fallback_to_cow(inode, page_folio(locked_page),
- cow_start, found_key.offset - 1);
+ ret = fallback_to_cow(inode, locked_folio, cow_start,
+ found_key.offset - 1);
cow_start = (u64)-1;
if (ret) {
btrfs_dec_nocow_writers(nocow_bg);
@@ -2208,8 +2208,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
btrfs_put_ordered_extent(ordered);
extent_clear_unlock_delalloc(inode, cur_offset, nocow_end,
- page_folio(locked_page),
- &cached_state,
+ locked_folio, &cached_state,
EXTENT_LOCKED | EXTENT_DELALLOC |
EXTENT_CLEAR_DATA_RESV,
PAGE_UNLOCK | PAGE_SET_ORDERED);
@@ -2231,8 +2230,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
if (cow_start != (u64)-1) {
cur_offset = end;
- ret = fallback_to_cow(inode, page_folio(locked_page), cow_start,
- end);
+ ret = fallback_to_cow(inode, locked_folio, cow_start, end);
cow_start = (u64)-1;
if (ret)
goto error;
@@ -2259,7 +2257,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
lock_extent(&inode->io_tree, cur_offset, end, &cached);
extent_clear_unlock_delalloc(inode, cur_offset, end,
- page_folio(locked_page), &cached,
+ locked_folio, &cached,
EXTENT_LOCKED | EXTENT_DELALLOC |
EXTENT_DEFRAG |
EXTENT_DO_ACCOUNTING, PAGE_UNLOCK |
@@ -2300,7 +2298,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
start >= page_offset(locked_page) + PAGE_SIZE));
if (should_nocow(inode, start, end)) {
- ret = run_delalloc_nocow(inode, locked_page, start, end);
+ ret = run_delalloc_nocow(inode, page_folio(locked_page), start,
+ end);
goto out;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 29/46] btrfs: convert btrfs_cleanup_ordered_extents to use folios
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (27 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 28/46] btrfs: convert run_delalloc_nocow " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 30/46] btrfs: convert btrfs_cleanup_ordered_extents to take a folio Josef Bacik
` (18 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We walk through the pages in this function and clear the ordered flag,
and the helper that does the clearing already takes a folio. Update the
function to use a folio for this whole operation.
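One detail worth noting, sketched here rather than quoted from the
patch: find_get_page() returns NULL on a miss, while __filemap_get_folio()
with no FGP flags returns an ERR_PTR, so the miss check flips from a NULL
test to IS_ERR().

	struct folio *folio;

	folio = __filemap_get_folio(mapping, index, 0, 0);
	if (IS_ERR(folio))
		continue;		/* nothing cached at this index */

	/* ... clamp and clear the ordered bit on this folio ... */
	folio_put(folio);		/* drop the reference from the lookup */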
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index a95bbe602a90..d1c81a368b52 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -399,7 +399,7 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
unsigned long index = offset >> PAGE_SHIFT;
unsigned long end_index = (offset + bytes - 1) >> PAGE_SHIFT;
u64 page_start = 0, page_end = 0;
- struct page *page;
+ struct folio *folio;
if (locked_page) {
page_start = page_offset(locked_page);
@@ -421,9 +421,9 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
index++;
continue;
}
- page = find_get_page(inode->vfs_inode.i_mapping, index);
+ folio = __filemap_get_folio(inode->vfs_inode.i_mapping, index, 0, 0);
index++;
- if (!page)
+ if (IS_ERR(folio))
continue;
/*
@@ -431,9 +431,9 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
* range, then btrfs_mark_ordered_io_finished() will handle
* the ordered extent accounting for the range.
*/
- btrfs_folio_clamp_clear_ordered(inode->root->fs_info,
- page_folio(page), offset, bytes);
- put_page(page);
+ btrfs_folio_clamp_clear_ordered(inode->root->fs_info, folio,
+ offset, bytes);
+ folio_put(folio);
}
if (locked_page) {
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 30/46] btrfs: convert btrfs_cleanup_ordered_extents to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (28 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 29/46] btrfs: convert btrfs_cleanup_ordered_extents to use folios Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 31/46] btrfs: convert run_delalloc_compressed " Josef Bacik
` (17 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that btrfs_cleanup_ordered_extents is operating mostly with folios,
update it to use a folio instead of a page, and update the function and
the callers as appropriate.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d1c81a368b52..76fa9b1e0f11 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -393,7 +393,7 @@ void btrfs_inode_unlock(struct btrfs_inode *inode, unsigned int ilock_flags)
* extent (btrfs_finish_ordered_io()).
*/
static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
- struct page *locked_page,
+ struct folio *locked_folio,
u64 offset, u64 bytes)
{
unsigned long index = offset >> PAGE_SHIFT;
@@ -401,9 +401,9 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
u64 page_start = 0, page_end = 0;
struct folio *folio;
- if (locked_page) {
- page_start = page_offset(locked_page);
- page_end = page_start + PAGE_SIZE - 1;
+ if (locked_folio) {
+ page_start = folio_pos(locked_folio);
+ page_end = page_start + folio_size(locked_folio) - 1;
}
while (index <= end_index) {
@@ -417,7 +417,7 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
* btrfs_mark_ordered_io_finished() would skip the accounting
* for the page range, and the ordered extent will never finish.
*/
- if (locked_page && index == (page_start >> PAGE_SHIFT)) {
+ if (locked_folio && index == (page_start >> PAGE_SHIFT)) {
index++;
continue;
}
@@ -436,9 +436,9 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
folio_put(folio);
}
- if (locked_page) {
+ if (locked_folio) {
/* The locked page covers the full range, nothing needs to be done */
- if (bytes + offset <= page_start + PAGE_SIZE)
+ if (bytes + offset <= page_start + folio_size(locked_folio))
return;
/*
* In case this page belongs to the delalloc range being
@@ -447,8 +447,9 @@ static inline void btrfs_cleanup_ordered_extents(struct btrfs_inode *inode,
* run_delalloc_range
*/
if (page_start >= offset && page_end <= (offset + bytes - 1)) {
- bytes = offset + bytes - page_offset(locked_page) - PAGE_SIZE;
- offset = page_offset(locked_page) + PAGE_SIZE;
+ bytes = offset + bytes - folio_pos(locked_folio) -
+ folio_size(locked_folio);
+ offset = folio_pos(locked_folio) + folio_size(locked_folio);
}
}
@@ -1138,7 +1139,8 @@ static void submit_uncompressed_range(struct btrfs_inode *inode,
&wbc, false);
wbc_detach_inode(&wbc);
if (ret < 0) {
- btrfs_cleanup_ordered_extents(inode, locked_page, start, end - start + 1);
+ btrfs_cleanup_ordered_extents(inode, page_folio(locked_page),
+ start, end - start + 1);
if (locked_page) {
const u64 page_start = page_offset(locked_page);
@@ -2317,8 +2319,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
out:
if (ret < 0)
- btrfs_cleanup_ordered_extents(inode, locked_page, start,
- end - start + 1);
+ btrfs_cleanup_ordered_extents(inode, page_folio(locked_page),
+ start, end - start + 1);
return ret;
}
--
2.43.0
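
As a minimal sketch (not taken from the patch; the helper and variable names
are illustrative), the folio accessors used in this conversion map to the old
page helpers like this: folio_pos() replaces page_offset() and folio_size()
replaces the implicit PAGE_SIZE, so the same range computation also holds for
folios larger than one page.

#include <linux/pagemap.h>

/*
 * Byte range covered by a folio, the folio equivalent of
 * page_offset(page) .. page_offset(page) + PAGE_SIZE - 1.
 */
static void folio_byte_range(struct folio *folio, u64 *start, u64 *end)
{
	*start = folio_pos(folio);			 /* file offset of the folio */
	*end = folio_pos(folio) + folio_size(folio) - 1; /* correct for large folios too */
}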
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 31/46] btrfs: convert run_delalloc_compressed to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (29 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 30/46] btrfs: convert btrfs_cleanup_ordered_extents to take a folio Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 32/46] btrfs: convert btrfs_run_delalloc_range " Josef Bacik
` (16 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
This just passes the page into the compressed machinery to keep track of
the locked page. Update this to take a folio and convert it to a page
where appropriate.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 76fa9b1e0f11..23ab3000c5fd 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1653,7 +1653,7 @@ static noinline void submit_compressed_extents(struct btrfs_work *work, bool do_
}
static bool run_delalloc_compressed(struct btrfs_inode *inode,
- struct page *locked_page, u64 start,
+ struct folio *locked_folio, u64 start,
u64 end, struct writeback_control *wbc)
{
struct btrfs_fs_info *fs_info = inode->root->fs_info;
@@ -1693,15 +1693,16 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
INIT_LIST_HEAD(&async_chunk[i].extents);
/*
- * The locked_page comes all the way from writepage and its
- * the original page we were actually given. As we spread
+ * The locked_folio comes all the way from writepage and its
+ * the original folio we were actually given. As we spread
* this large delalloc region across multiple async_chunk
- * structs, only the first struct needs a pointer to locked_page
+ * structs, only the first struct needs a pointer to
+ * locked_folio.
*
* This way we don't need racey decisions about who is supposed
* to unlock it.
*/
- if (locked_page) {
+ if (locked_folio) {
/*
* Depending on the compressibility, the pages might or
* might not go through async. We want all of them to
@@ -1711,10 +1712,10 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
* need full accuracy. Just account the whole thing
* against the first page.
*/
- wbc_account_cgroup_owner(wbc, locked_page,
+ wbc_account_cgroup_owner(wbc, &locked_folio->page,
cur_end - start);
- async_chunk[i].locked_page = locked_page;
- locked_page = NULL;
+ async_chunk[i].locked_page = &locked_folio->page;
+ locked_folio = NULL;
} else {
async_chunk[i].locked_page = NULL;
}
@@ -2307,7 +2308,8 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
if (btrfs_inode_can_compress(inode) &&
inode_need_compress(inode, start, end) &&
- run_delalloc_compressed(inode, locked_page, start, end, wbc))
+ run_delalloc_compressed(inode, page_folio(locked_page), start, end,
+ wbc))
return 1;
if (zoned)
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 32/46] btrfs: convert btrfs_run_delalloc_range to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (30 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 31/46] btrfs: convert run_delalloc_compressed " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 33/46] btrfs: convert async_chunk to hold " Josef Bacik
` (15 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that every function that btrfs_run_delalloc_range calls takes a
folio, update it to take a folio and update the callers.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/btrfs_inode.h | 2 +-
fs/btrfs/extent_io.c | 2 +-
fs/btrfs/inode.c | 26 ++++++++++++--------------
3 files changed, 14 insertions(+), 16 deletions(-)
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index 3056c8aed8ef..5599b458a9a9 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -596,7 +596,7 @@ int btrfs_prealloc_file_range_trans(struct inode *inode,
struct btrfs_trans_handle *trans, int mode,
u64 start, u64 num_bytes, u64 min_size,
loff_t actual_len, u64 *alloc_hint);
-int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page,
+int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct folio *locked_folio,
u64 start, u64 end, struct writeback_control *wbc);
int btrfs_writepage_cow_fixup(struct page *page);
int btrfs_encoded_io_compression_from_extent(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 1faadf903e00..2f46a85888b9 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1254,7 +1254,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
if (ret >= 0) {
/* No errors hit so far, run the current delalloc range. */
- ret = btrfs_run_delalloc_range(inode, &folio->page,
+ ret = btrfs_run_delalloc_range(inode, folio,
found_start,
found_start + found_len - 1,
wbc);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 23ab3000c5fd..a16b9aba7f96 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2287,42 +2287,40 @@ static bool should_nocow(struct btrfs_inode *inode, u64 start, u64 end)
* Function to process delayed allocation (create CoW) for ranges which are
* being touched for the first time.
*/
-int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page,
+int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct folio *locked_folio,
u64 start, u64 end, struct writeback_control *wbc)
{
const bool zoned = btrfs_is_zoned(inode->root->fs_info);
int ret;
/*
- * The range must cover part of the @locked_page, or a return of 1
+ * The range must cover part of the @locked_folio, or a return of 1
* can confuse the caller.
*/
- ASSERT(!(end <= page_offset(locked_page) ||
- start >= page_offset(locked_page) + PAGE_SIZE));
+ ASSERT(!(end <= folio_pos(locked_folio) ||
+ start >= folio_pos(locked_folio) + folio_size(locked_folio)));
if (should_nocow(inode, start, end)) {
- ret = run_delalloc_nocow(inode, page_folio(locked_page), start,
- end);
+ ret = run_delalloc_nocow(inode, locked_folio, start, end);
goto out;
}
if (btrfs_inode_can_compress(inode) &&
inode_need_compress(inode, start, end) &&
- run_delalloc_compressed(inode, page_folio(locked_page), start, end,
- wbc))
+ run_delalloc_compressed(inode, locked_folio, start, end, wbc))
return 1;
if (zoned)
- ret = run_delalloc_cow(inode, page_folio(locked_page), start,
- end, wbc, true);
+ ret = run_delalloc_cow(inode, locked_folio, start, end, wbc,
+ true);
else
- ret = cow_file_range(inode, page_folio(locked_page), start, end,
- NULL, false, false);
+ ret = cow_file_range(inode, locked_folio, start, end, NULL,
+ false, false);
out:
if (ret < 0)
- btrfs_cleanup_ordered_extents(inode, page_folio(locked_page),
- start, end - start + 1);
+ btrfs_cleanup_ordered_extents(inode, locked_folio, start,
+ end - start + 1);
return ret;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 33/46] btrfs: convert async_chunk to hold a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (31 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 32/46] btrfs: convert btrfs_run_delalloc_range " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 34/46] btrfs: convert submit_uncompressed_range to take " Josef Bacik
` (14 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Instead of passing in the page for ->locked_page, make it hold a
locked_folio and then update the users of async_chunk to act
accordingly.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index a16b9aba7f96..fbb21deef54c 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -762,7 +762,7 @@ struct async_extent {
struct async_chunk {
struct btrfs_inode *inode;
- struct page *locked_page;
+ struct folio *locked_folio;
u64 start;
u64 end;
blk_opf_t write_flags;
@@ -1167,7 +1167,7 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
struct btrfs_ordered_extent *ordered;
struct btrfs_file_extent file_extent;
struct btrfs_key ins;
- struct page *locked_page = NULL;
+ struct folio *locked_folio = NULL;
struct extent_state *cached = NULL;
struct extent_map *em;
int ret = 0;
@@ -1178,19 +1178,20 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
kthread_associate_blkcg(async_chunk->blkcg_css);
/*
- * If async_chunk->locked_page is in the async_extent range, we need to
+ * If async_chunk->locked_folio is in the async_extent range, we need to
* handle it.
*/
- if (async_chunk->locked_page) {
- u64 locked_page_start = page_offset(async_chunk->locked_page);
- u64 locked_page_end = locked_page_start + PAGE_SIZE - 1;
+ if (async_chunk->locked_folio) {
+ u64 locked_folio_start = folio_pos(async_chunk->locked_folio);
+ u64 locked_folio_end = locked_folio_start +
+ folio_size(async_chunk->locked_folio) - 1;
- if (!(start >= locked_page_end || end <= locked_page_start))
- locked_page = async_chunk->locked_page;
+ if (!(start >= locked_folio_end || end <= locked_folio_start))
+ locked_folio = async_chunk->locked_folio;
}
if (async_extent->compress_type == BTRFS_COMPRESS_NONE) {
- submit_uncompressed_range(inode, async_extent, locked_page);
+ submit_uncompressed_range(inode, async_extent, &locked_folio->page);
goto done;
}
@@ -1205,7 +1206,8 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
* non-contiguous space for the uncompressed size instead. So
* fall back to uncompressed.
*/
- submit_uncompressed_range(inode, async_extent, locked_page);
+ submit_uncompressed_range(inode, async_extent,
+ &locked_folio->page);
goto done;
}
@@ -1714,10 +1716,10 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
*/
wbc_account_cgroup_owner(wbc, &locked_folio->page,
cur_end - start);
- async_chunk[i].locked_page = &locked_folio->page;
+ async_chunk[i].locked_folio = locked_folio;
locked_folio = NULL;
} else {
- async_chunk[i].locked_page = NULL;
+ async_chunk[i].locked_folio = NULL;
}
if (blkcg_css != blkcg_root_css) {
--
2.43.0
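
For context, wbc_account_cgroup_owner() still takes a struct page at this
point in the series, which is why the patch passes &locked_folio->page. A
minimal sketch of that bridge pattern (the wrapper name is illustrative, not
from the patch):

#include <linux/writeback.h>
#include <linux/pagemap.h>

static void account_locked_folio(struct writeback_control *wbc,
				 struct folio *folio, size_t bytes)
{
	/* Hand the folio's head page to an API that has not been converted yet. */
	wbc_account_cgroup_owner(wbc, &folio->page, bytes);
}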
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 34/46] btrfs: convert submit_uncompressed_range to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (32 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 33/46] btrfs: convert async_chunk to hold " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 35/46] btrfs: convert btrfs_writepage_fixup_worker to use " Josef Bacik
` (13 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
This mostly uses folios already; update it to take a folio and convert
the rest of the function to use the folio instead of the page.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index fbb21deef54c..737af2d6bebe 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1122,7 +1122,7 @@ static void free_async_extent_pages(struct async_extent *async_extent)
static void submit_uncompressed_range(struct btrfs_inode *inode,
struct async_extent *async_extent,
- struct page *locked_page)
+ struct folio *locked_folio)
{
u64 start = async_extent->start;
u64 end = async_extent->start + async_extent->ram_size - 1;
@@ -1135,23 +1135,22 @@ static void submit_uncompressed_range(struct btrfs_inode *inode,
};
wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode);
- ret = run_delalloc_cow(inode, page_folio(locked_page), start, end,
+ ret = run_delalloc_cow(inode, locked_folio, start, end,
&wbc, false);
wbc_detach_inode(&wbc);
if (ret < 0) {
- btrfs_cleanup_ordered_extents(inode, page_folio(locked_page),
+ btrfs_cleanup_ordered_extents(inode, locked_folio,
start, end - start + 1);
- if (locked_page) {
- const u64 page_start = page_offset(locked_page);
+ if (locked_folio) {
+ const u64 page_start = folio_pos(locked_folio);
- set_page_writeback(locked_page);
- end_page_writeback(locked_page);
- btrfs_mark_ordered_io_finished(inode,
- page_folio(locked_page),
+ folio_start_writeback(locked_folio);
+ folio_end_writeback(locked_folio);
+ btrfs_mark_ordered_io_finished(inode, locked_folio,
page_start, PAGE_SIZE,
!ret);
- mapping_set_error(locked_page->mapping, ret);
- unlock_page(locked_page);
+ mapping_set_error(locked_folio->mapping, ret);
+ folio_unlock(locked_folio);
}
}
}
@@ -1191,7 +1190,7 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
}
if (async_extent->compress_type == BTRFS_COMPRESS_NONE) {
- submit_uncompressed_range(inode, async_extent, &locked_folio->page);
+ submit_uncompressed_range(inode, async_extent, locked_folio);
goto done;
}
@@ -1206,8 +1205,7 @@ static void submit_one_async_extent(struct async_chunk *async_chunk,
* non-contiguous space for the uncompressed size instead. So
* fall back to uncompressed.
*/
- submit_uncompressed_range(inode, async_extent,
- &locked_folio->page);
+ submit_uncompressed_range(inode, async_extent, locked_folio);
goto done;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 35/46] btrfs: convert btrfs_writepage_fixup_worker to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (33 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 34/46] btrfs: convert submit_uncompressed_range to take " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 36/46] btrfs: convert btrfs_writepage_cow_fixup to use folio Josef Bacik
` (12 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
This function heavily messes with pages; update it to use a folio
instead.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 54 +++++++++++++++++++++++++-----------------------
1 file changed, 28 insertions(+), 26 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 737af2d6bebe..cd1b3e956d7f 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2708,49 +2708,51 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
struct extent_state *cached_state = NULL;
struct extent_changeset *data_reserved = NULL;
struct page *page = fixup->page;
+ struct folio *folio = page_folio(page);
struct btrfs_inode *inode = fixup->inode;
struct btrfs_fs_info *fs_info = inode->root->fs_info;
- u64 page_start = page_offset(page);
- u64 page_end = page_offset(page) + PAGE_SIZE - 1;
+ u64 page_start = folio_pos(folio);
+ u64 page_end = folio_pos(folio) + folio_size(folio) - 1;
int ret = 0;
bool free_delalloc_space = true;
/*
* This is similar to page_mkwrite, we need to reserve the space before
- * we take the page lock.
+ * we take the folio lock.
*/
ret = btrfs_delalloc_reserve_space(inode, &data_reserved, page_start,
- PAGE_SIZE);
+ folio_size(folio));
again:
- lock_page(page);
+ folio_lock(folio);
/*
- * Before we queued this fixup, we took a reference on the page.
- * page->mapping may go NULL, but it shouldn't be moved to a different
+ * Before we queued this fixup, we took a reference on the folio.
+ * folio->mapping may go NULL, but it shouldn't be moved to a different
* address space.
*/
- if (!page->mapping || !PageDirty(page) || !PageChecked(page)) {
+ if (!folio->mapping || !folio_test_dirty(folio) ||
+ !folio_test_checked(folio)) {
/*
* Unfortunately this is a little tricky, either
*
- * 1) We got here and our page had already been dealt with and
+ * 1) We got here and our folio had already been dealt with and
* we reserved our space, thus ret == 0, so we need to just
* drop our space reservation and bail. This can happen the
* first time we come into the fixup worker, or could happen
* while waiting for the ordered extent.
- * 2) Our page was already dealt with, but we happened to get an
+ * 2) Our folio was already dealt with, but we happened to get an
* ENOSPC above from the btrfs_delalloc_reserve_space. In
* this case we obviously don't have anything to release, but
- * because the page was already dealt with we don't want to
- * mark the page with an error, so make sure we're resetting
+ * because the folio was already dealt with we don't want to
+ * mark the folio with an error, so make sure we're resetting
* ret to 0. This is why we have this check _before_ the ret
* check, because we do not want to have a surprise ENOSPC
- * when the page was already properly dealt with.
+ * when the folio was already properly dealt with.
*/
if (!ret) {
- btrfs_delalloc_release_extents(inode, PAGE_SIZE);
+ btrfs_delalloc_release_extents(inode, folio_size(folio));
btrfs_delalloc_release_space(inode, data_reserved,
- page_start, PAGE_SIZE,
+ page_start, folio_size(folio),
true);
}
ret = 0;
@@ -2758,7 +2760,7 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
}
/*
- * We can't mess with the page state unless it is locked, so now that
+ * We can't mess with the folio state unless it is locked, so now that
* it is locked bail if we failed to make our space reservation.
*/
if (ret)
@@ -2767,14 +2769,14 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
lock_extent(&inode->io_tree, page_start, page_end, &cached_state);
/* already ordered? We're done */
- if (PageOrdered(page))
+ if (folio_test_ordered(folio))
goto out_reserved;
ordered = btrfs_lookup_ordered_range(inode, page_start, PAGE_SIZE);
if (ordered) {
unlock_extent(&inode->io_tree, page_start, page_end,
&cached_state);
- unlock_page(page);
+ folio_unlock(folio);
btrfs_start_ordered_extent(ordered);
btrfs_put_ordered_extent(ordered);
goto again;
@@ -2792,7 +2794,7 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
*
* The page was dirty when we started, nothing should have cleaned it.
*/
- BUG_ON(!PageDirty(page));
+ BUG_ON(!folio_test_dirty(folio));
free_delalloc_space = false;
out_reserved:
btrfs_delalloc_release_extents(inode, PAGE_SIZE);
@@ -2806,14 +2808,14 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
* We hit ENOSPC or other errors. Update the mapping and page
* to reflect the errors and clean the page.
*/
- mapping_set_error(page->mapping, ret);
- btrfs_mark_ordered_io_finished(inode, page_folio(page),
- page_start, PAGE_SIZE, !ret);
- clear_page_dirty_for_io(page);
+ mapping_set_error(folio->mapping, ret);
+ btrfs_mark_ordered_io_finished(inode, folio, page_start,
+ folio_size(folio), !ret);
+ folio_clear_dirty_for_io(folio);
}
- btrfs_folio_clear_checked(fs_info, page_folio(page), page_start, PAGE_SIZE);
- unlock_page(page);
- put_page(page);
+ btrfs_folio_clear_checked(fs_info, folio, page_start, PAGE_SIZE);
+ folio_unlock(folio);
+ folio_put(folio);
kfree(fixup);
extent_changeset_free(data_reserved);
/*
--
2.43.0
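
A minimal sketch of the page-flag to folio-flag mapping the fixup worker now
relies on (the helper name is illustrative and the check simply mirrors the
converted condition above, assuming the folio is already locked):

#include <linux/pagemap.h>
#include <linux/page-flags.h>

static bool fixup_folio_needs_work(struct folio *folio)
{
	/*
	 * folio->mapping, folio_test_dirty() and folio_test_checked() replace
	 * page->mapping, PageDirty() and PageChecked().
	 */
	return folio->mapping && folio_test_dirty(folio) &&
	       folio_test_checked(folio);
}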
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 36/46] btrfs: convert btrfs_writepage_cow_fixup to use folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (34 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 35/46] btrfs: convert btrfs_writepage_fixup_worker to use " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 37/46] btrfs: convert btrfs_writepage_fixup to use a folio Josef Bacik
` (11 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Instead of a page, use a folio for btrfs_writepage_cow_fixup. We
already have a folio at the only caller, and the fixup worker uses
folios.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/btrfs_inode.h | 2 +-
fs/btrfs/extent_io.c | 2 +-
fs/btrfs/inode.c | 31 ++++++++++++++++---------------
3 files changed, 18 insertions(+), 17 deletions(-)
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index 5599b458a9a9..fc60c0cde479 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -598,7 +598,7 @@ int btrfs_prealloc_file_range_trans(struct inode *inode,
loff_t actual_len, u64 *alloc_hint);
int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct folio *locked_folio,
u64 start, u64 end, struct writeback_control *wbc);
-int btrfs_writepage_cow_fixup(struct page *page);
+int btrfs_writepage_cow_fixup(struct folio *folio);
int btrfs_encoded_io_compression_from_extent(struct btrfs_fs_info *fs_info,
int compress_type);
int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 2f46a85888b9..ab5715de5f40 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1410,7 +1410,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
ASSERT(start >= folio_pos(folio) &&
start + len <= folio_pos(folio) + folio_size(folio));
- ret = btrfs_writepage_cow_fixup(&folio->page);
+ ret = btrfs_writepage_cow_fixup(folio);
if (ret) {
/* Fixup worker will requeue */
folio_redirty_for_writepage(bio_ctrl->wbc, folio);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index cd1b3e956d7f..9234ae84175a 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2828,33 +2828,34 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
/*
* There are a few paths in the higher layers of the kernel that directly
- * set the page dirty bit without asking the filesystem if it is a
+ * set the folio dirty bit without asking the filesystem if it is a
* good idea. This causes problems because we want to make sure COW
* properly happens and the data=ordered rules are followed.
*
* In our case any range that doesn't have the ORDERED bit set
* hasn't been properly setup for IO. We kick off an async process
* to fix it up. The async helper will wait for ordered extents, set
- * the delalloc bit and make it safe to write the page.
+ * the delalloc bit and make it safe to write the folio.
*/
-int btrfs_writepage_cow_fixup(struct page *page)
+int btrfs_writepage_cow_fixup(struct folio *folio)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = folio->mapping->host;
struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
struct btrfs_writepage_fixup *fixup;
- /* This page has ordered extent covering it already */
- if (PageOrdered(page))
+ /* This folio has ordered extent covering it already */
+ if (folio_test_ordered(folio))
return 0;
/*
- * PageChecked is set below when we create a fixup worker for this page,
- * don't try to create another one if we're already PageChecked()
+ * folio_checked is set below when we create a fixup worker for this
+ * folio, don't try to create another one if we're already
+ * folio_test_checked.
*
- * The extent_io writepage code will redirty the page if we send back
+ * The extent_io writepage code will redirty the folio if we send back
* EAGAIN.
*/
- if (PageChecked(page))
+ if (folio_test_checked(folio))
return -EAGAIN;
fixup = kzalloc(sizeof(*fixup), GFP_NOFS);
@@ -2864,14 +2865,14 @@ int btrfs_writepage_cow_fixup(struct page *page)
/*
* We are already holding a reference to this inode from
* write_cache_pages. We need to hold it because the space reservation
- * takes place outside of the page lock, and we can't trust
- * page->mapping outside of the page lock.
+ * takes place outside of the folio lock, and we can't trust
+ * page->mapping outside of the folio lock.
*/
ihold(inode);
- btrfs_folio_set_checked(fs_info, page_folio(page), page_offset(page), PAGE_SIZE);
- get_page(page);
+ btrfs_folio_set_checked(fs_info, folio, folio_pos(folio), folio_size(folio));
+ folio_get(folio);
btrfs_init_work(&fixup->work, btrfs_writepage_fixup_worker, NULL);
- fixup->page = page;
+ fixup->page = &folio->page;
fixup->inode = BTRFS_I(inode);
btrfs_queue_work(fs_info->fixup_workers, &fixup->work);
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 37/46] btrfs: convert btrfs_writepage_fixup to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (35 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 36/46] btrfs: convert btrfs_writepage_cow_fixup to use folio Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 38/46] btrfs: convert uncompress_inline to take " Josef Bacik
` (10 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that the fixup creator and consumer use folios, change this to use a
folio as well.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9234ae84175a..0667da7b1895 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -2695,7 +2695,7 @@ int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
/* see btrfs_writepage_start_hook for details on why this is required */
struct btrfs_writepage_fixup {
- struct page *page;
+ struct folio *folio;
struct btrfs_inode *inode;
struct btrfs_work work;
};
@@ -2707,8 +2707,7 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
struct btrfs_ordered_extent *ordered;
struct extent_state *cached_state = NULL;
struct extent_changeset *data_reserved = NULL;
- struct page *page = fixup->page;
- struct folio *folio = page_folio(page);
+ struct folio *folio = fixup->folio;
struct btrfs_inode *inode = fixup->inode;
struct btrfs_fs_info *fs_info = inode->root->fs_info;
u64 page_start = folio_pos(folio);
@@ -2872,7 +2871,7 @@ int btrfs_writepage_cow_fixup(struct folio *folio)
btrfs_folio_set_checked(fs_info, folio, folio_pos(folio), folio_size(folio));
folio_get(folio);
btrfs_init_work(&fixup->work, btrfs_writepage_fixup_worker, NULL);
- fixup->page = &folio->page;
+ fixup->folio = folio;
fixup->inode = BTRFS_I(inode);
btrfs_queue_work(fs_info->fixup_workers, &fixup->work);
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 38/46] btrfs: convert uncompress_inline to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (36 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 37/46] btrfs: convert btrfs_writepage_fixup to use a folio Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 39/46] btrfs: convert read_inline_extent to use " Josef Bacik
` (9 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Update uncompress_inline to take a folio and update its usage
accordingly.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0667da7b1895..560575a5de03 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6706,7 +6706,7 @@ static int btrfs_mkdir(struct mnt_idmap *idmap, struct inode *dir,
}
static noinline int uncompress_inline(struct btrfs_path *path,
- struct page *page,
+ struct folio *folio,
struct btrfs_file_extent_item *item)
{
int ret;
@@ -6728,7 +6728,8 @@ static noinline int uncompress_inline(struct btrfs_path *path,
read_extent_buffer(leaf, tmp, ptr, inline_size);
max_size = min_t(unsigned long, PAGE_SIZE, max_size);
- ret = btrfs_decompress(compress_type, tmp, page, 0, inline_size, max_size);
+ ret = btrfs_decompress(compress_type, tmp, &folio->page, 0, inline_size,
+ max_size);
/*
* decompression code contains a memset to fill in any space between the end
@@ -6739,7 +6740,7 @@ static noinline int uncompress_inline(struct btrfs_path *path,
*/
if (max_size < PAGE_SIZE)
- memzero_page(page, max_size, PAGE_SIZE - max_size);
+ folio_zero_range(folio, max_size, PAGE_SIZE - max_size);
kfree(tmp);
return ret;
}
@@ -6759,7 +6760,7 @@ static int read_inline_extent(struct btrfs_inode *inode, struct btrfs_path *path
fi = btrfs_item_ptr(path->nodes[0], path->slots[0],
struct btrfs_file_extent_item);
if (btrfs_file_extent_compression(path->nodes[0], fi) != BTRFS_COMPRESS_NONE)
- return uncompress_inline(path, page, fi);
+ return uncompress_inline(path, page_folio(page), fi);
copy_size = min_t(u64, PAGE_SIZE,
btrfs_file_extent_ram_bytes(path->nodes[0], fi));
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 39/46] btrfs: convert read_inline_extent to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (37 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 38/46] btrfs: convert uncompress_inline to take " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 40/46] btrfs: convert btrfs_get_extent to take " Josef Bacik
` (8 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Instead of using a page, use a folio: take a folio as an argument and
update the callers appropriately.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 560575a5de03..45835074aa6f 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6746,30 +6746,30 @@ static noinline int uncompress_inline(struct btrfs_path *path,
}
static int read_inline_extent(struct btrfs_inode *inode, struct btrfs_path *path,
- struct page *page)
+ struct folio *folio)
{
struct btrfs_file_extent_item *fi;
void *kaddr;
size_t copy_size;
- if (!page || PageUptodate(page))
+ if (!folio || folio_test_uptodate(folio))
return 0;
- ASSERT(page_offset(page) == 0);
+ ASSERT(folio_pos(folio) == 0);
fi = btrfs_item_ptr(path->nodes[0], path->slots[0],
struct btrfs_file_extent_item);
if (btrfs_file_extent_compression(path->nodes[0], fi) != BTRFS_COMPRESS_NONE)
- return uncompress_inline(path, page_folio(page), fi);
+ return uncompress_inline(path, folio, fi);
copy_size = min_t(u64, PAGE_SIZE,
btrfs_file_extent_ram_bytes(path->nodes[0], fi));
- kaddr = kmap_local_page(page);
+ kaddr = kmap_local_folio(folio, 0);
read_extent_buffer(path->nodes[0], kaddr,
btrfs_file_extent_inline_start(fi), copy_size);
kunmap_local(kaddr);
if (copy_size < PAGE_SIZE)
- memzero_page(page, copy_size, PAGE_SIZE - copy_size);
+ folio_zero_range(folio, copy_size, PAGE_SIZE - copy_size);
return 0;
}
@@ -6944,7 +6944,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
ASSERT(em->disk_bytenr == EXTENT_MAP_INLINE);
ASSERT(em->len == fs_info->sectorsize);
- ret = read_inline_extent(inode, path, page);
+ ret = read_inline_extent(inode, path, page_folio(page));
if (ret < 0)
goto out;
goto insert;
--
2.43.0
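
A minimal sketch of the mapping pattern used above, assuming the copy fits in
the first page of the folio (the helper and parameter names are illustrative,
not from the patch):

#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/string.h>

static void copy_into_folio_head(struct folio *folio, const void *src, size_t copy_size)
{
	/* kmap_local_folio() maps the page containing the given byte offset. */
	void *kaddr = kmap_local_folio(folio, 0);

	memcpy(kaddr, src, copy_size);
	kunmap_local(kaddr);

	/* Zero the rest of the first page, as read_inline_extent() does. */
	if (copy_size < PAGE_SIZE)
		folio_zero_range(folio, copy_size, PAGE_SIZE - copy_size);
}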
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 40/46] btrfs: convert btrfs_get_extent to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (38 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 39/46] btrfs: convert read_inline_extent to use " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 41/46] btrfs: convert __get_extent_map " Josef Bacik
` (7 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We only pass this into read_inline_extent; change it to take a folio and
update the callers.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/btrfs_inode.h | 2 +-
fs/btrfs/extent_io.c | 2 +-
fs/btrfs/inode.c | 6 +++---
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index fc60c0cde479..2d7f8da54d8a 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -578,7 +578,7 @@ struct inode *btrfs_iget_path(u64 ino, struct btrfs_root *root,
struct btrfs_path *path);
struct inode *btrfs_iget(u64 ino, struct btrfs_root *root);
struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
- struct page *page, u64 start, u64 len);
+ struct folio *folio, u64 start, u64 len);
int btrfs_update_inode(struct btrfs_trans_handle *trans,
struct btrfs_inode *inode);
int btrfs_update_inode_fallback(struct btrfs_trans_handle *trans,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index ab5715de5f40..2a80dfbc8248 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -987,7 +987,7 @@ static struct extent_map *__get_extent_map(struct inode *inode, struct page *pag
*em_cached = NULL;
}
- em = btrfs_get_extent(BTRFS_I(inode), page, start, len);
+ em = btrfs_get_extent(BTRFS_I(inode), page_folio(page), start, len);
if (!IS_ERR(em)) {
BUG_ON(*em_cached);
refcount_inc(&em->refs);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 45835074aa6f..0cdb0b86e670 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6791,7 +6791,7 @@ static int read_inline_extent(struct btrfs_inode *inode, struct btrfs_path *path
* Return: ERR_PTR on error, non-NULL extent_map on success.
*/
struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
- struct page *page, u64 start, u64 len)
+ struct folio *folio, u64 start, u64 len)
{
struct btrfs_fs_info *fs_info = inode->root->fs_info;
int ret = 0;
@@ -6814,7 +6814,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
if (em) {
if (em->start > start || em->start + em->len <= start)
free_extent_map(em);
- else if (em->disk_bytenr == EXTENT_MAP_INLINE && page)
+ else if (em->disk_bytenr == EXTENT_MAP_INLINE && folio)
free_extent_map(em);
else
goto out;
@@ -6944,7 +6944,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
ASSERT(em->disk_bytenr == EXTENT_MAP_INLINE);
ASSERT(em->len == fs_info->sectorsize);
- ret = read_inline_extent(inode, path, page_folio(page));
+ ret = read_inline_extent(inode, path, folio);
if (ret < 0)
goto out;
goto insert;
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 41/46] btrfs: convert __get_extent_map to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (39 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 40/46] btrfs: convert btrfs_get_extent to take " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 42/46] btrfs: convert find_next_dirty_byte " Josef Bacik
` (6 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Now that btrfs_get_extent takes a folio, update __get_extent_map to
take a folio as well.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 2a80dfbc8248..4e9f0baba2ca 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -968,8 +968,9 @@ void clear_page_extent_mapped(struct page *page)
folio_detach_private(folio);
}
-static struct extent_map *__get_extent_map(struct inode *inode, struct page *page,
- u64 start, u64 len, struct extent_map **em_cached)
+static struct extent_map *__get_extent_map(struct inode *inode,
+ struct folio *folio, u64 start,
+ u64 len, struct extent_map **em_cached)
{
struct extent_map *em;
@@ -987,7 +988,7 @@ static struct extent_map *__get_extent_map(struct inode *inode, struct page *pag
*em_cached = NULL;
}
- em = btrfs_get_extent(BTRFS_I(inode), page_folio(page), start, len);
+ em = btrfs_get_extent(BTRFS_I(inode), folio, start, len);
if (!IS_ERR(em)) {
BUG_ON(*em_cached);
refcount_inc(&em->refs);
@@ -1050,8 +1051,8 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
end_folio_read(folio, true, cur, iosize);
break;
}
- em = __get_extent_map(inode, folio_page(folio, 0), cur,
- end - cur + 1, em_cached);
+ em = __get_extent_map(inode, folio, cur, end - cur + 1,
+ em_cached);
if (IS_ERR(em)) {
unlock_extent(tree, cur, end, NULL);
end_folio_read(folio, false, cur, end + 1 - cur);
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 42/46] btrfs: convert find_next_dirty_byte to take a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (40 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 41/46] btrfs: convert __get_extent_map " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 43/46] btrfs: convert wait_subpage_spinlock to only use " Josef Bacik
` (5 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We already use a folio in parts of this function; replace the remaining
page usage with the folio and update the function to take the folio as an argument.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/extent_io.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 4e9f0baba2ca..040c92541bc9 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1348,9 +1348,8 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
* If no dirty range is found, @start will be page_offset(page) + PAGE_SIZE.
*/
static void find_next_dirty_byte(const struct btrfs_fs_info *fs_info,
- struct page *page, u64 *start, u64 *end)
+ struct folio *folio, u64 *start, u64 *end)
{
- struct folio *folio = page_folio(page);
struct btrfs_subpage *subpage = folio_get_private(folio);
struct btrfs_subpage_info *spi = fs_info->subpage_info;
u64 orig_start = *start;
@@ -1363,14 +1362,15 @@ static void find_next_dirty_byte(const struct btrfs_fs_info *fs_info,
* For regular sector size == page size case, since one page only
* contains one sector, we return the page offset directly.
*/
- if (!btrfs_is_subpage(fs_info, page->mapping)) {
- *start = page_offset(page);
- *end = page_offset(page) + PAGE_SIZE;
+ if (!btrfs_is_subpage(fs_info, folio->mapping)) {
+ *start = folio_pos(folio);
+ *end = folio_pos(folio) + folio_size(folio);
return;
}
range_start_bit = spi->dirty_offset +
- (offset_in_page(orig_start) >> fs_info->sectorsize_bits);
+ (offset_in_folio(folio, orig_start) >>
+ fs_info->sectorsize_bits);
/* We should have the page locked, but just in case */
spin_lock_irqsave(&subpage->lock, flags);
@@ -1381,8 +1381,8 @@ static void find_next_dirty_byte(const struct btrfs_fs_info *fs_info,
range_start_bit -= spi->dirty_offset;
range_end_bit -= spi->dirty_offset;
- *start = page_offset(page) + range_start_bit * fs_info->sectorsize;
- *end = page_offset(page) + range_end_bit * fs_info->sectorsize;
+ *start = folio_pos(folio) + range_start_bit * fs_info->sectorsize;
+ *end = folio_pos(folio) + range_end_bit * fs_info->sectorsize;
}
/*
@@ -1443,7 +1443,7 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
break;
}
- find_next_dirty_byte(fs_info, &folio->page, &dirty_range_start,
+ find_next_dirty_byte(fs_info, folio, &dirty_range_start,
&dirty_range_end);
if (cur < dirty_range_start) {
cur = dirty_range_start;
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 43/46] btrfs: convert wait_subpage_spinlock to only use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (41 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 42/46] btrfs: convert find_next_dirty_byte " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 44/46] btrfs: convert btrfs_set_range_writeback to " Josef Bacik
` (4 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Currently this already uses a folio for most things; update it to take a
folio and replace all the page usage with the corresponding folio usage.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0cdb0b86e670..80022a8c718e 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7186,13 +7186,12 @@ struct extent_map *btrfs_create_io_em(struct btrfs_inode *inode, u64 start,
* for subpage spinlock. So this function is to spin and wait for subpage
* spinlock.
*/
-static void wait_subpage_spinlock(struct page *page)
+static void wait_subpage_spinlock(struct folio *folio)
{
- struct btrfs_fs_info *fs_info = page_to_fs_info(page);
- struct folio *folio = page_folio(page);
+ struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
struct btrfs_subpage *subpage;
- if (!btrfs_is_subpage(fs_info, page->mapping))
+ if (!btrfs_is_subpage(fs_info, folio->mapping))
return;
ASSERT(folio_test_private(folio) && folio_get_private(folio));
@@ -7221,7 +7220,7 @@ static int btrfs_launder_folio(struct folio *folio)
static bool __btrfs_release_folio(struct folio *folio, gfp_t gfp_flags)
{
if (try_release_extent_mapping(&folio->page, gfp_flags)) {
- wait_subpage_spinlock(&folio->page);
+ wait_subpage_spinlock(folio);
clear_page_extent_mapped(&folio->page);
return true;
}
@@ -7282,7 +7281,7 @@ static void btrfs_invalidate_folio(struct folio *folio, size_t offset,
* do double ordered extent accounting on the same folio.
*/
folio_wait_writeback(folio);
- wait_subpage_spinlock(&folio->page);
+ wait_subpage_spinlock(folio);
/*
* For subpage case, we have call sites like
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 44/46] btrfs: convert btrfs_set_range_writeback to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (42 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 43/46] btrfs: convert wait_subpage_spinlock to only use " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 45/46] btrfs: convert insert_inline_extent " Josef Bacik
` (3 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We already use a lot of folio-based functions here; update the function
to use __filemap_get_folio instead of find_get_page and then use the
folio directly.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 80022a8c718e..2f14b337a7ef 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8956,19 +8956,19 @@ void btrfs_set_range_writeback(struct btrfs_inode *inode, u64 start, u64 end)
struct btrfs_fs_info *fs_info = inode->root->fs_info;
unsigned long index = start >> PAGE_SHIFT;
unsigned long end_index = end >> PAGE_SHIFT;
- struct page *page;
+ struct folio *folio;
u32 len;
ASSERT(end + 1 - start <= U32_MAX);
len = end + 1 - start;
while (index <= end_index) {
- page = find_get_page(inode->vfs_inode.i_mapping, index);
- ASSERT(page); /* Pages should be in the extent_io_tree */
+ folio = __filemap_get_folio(inode->vfs_inode.i_mapping, index, 0, 0);
+ ASSERT(!IS_ERR(folio)); /* folios should be in the extent_io_tree */
/* This is for data, which doesn't yet support larger folio. */
- ASSERT(folio_order(page_folio(page)) == 0);
- btrfs_folio_set_writeback(fs_info, page_folio(page), start, len);
- put_page(page);
+ ASSERT(folio_order(folio) == 0);
+ btrfs_folio_set_writeback(fs_info, folio, start, len);
+ folio_put(folio);
index++;
}
}
--
2.43.0
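
One behavioural detail worth noting in this conversion: find_get_page()
returns NULL on a cache miss, while __filemap_get_folio() with no FGP flags
returns ERR_PTR(-ENOENT), hence the IS_ERR() assertion above. A minimal
sketch of the lookup pattern (the helper name is illustrative, not from the
patch):

#include <linux/pagemap.h>
#include <linux/err.h>

static struct folio *lookup_cached_folio(struct address_space *mapping, pgoff_t index)
{
	struct folio *folio = __filemap_get_folio(mapping, index, 0, 0);

	if (IS_ERR(folio))
		return NULL;	/* cache miss, mirroring the old find_get_page() behaviour */
	return folio;		/* reference held, caller must folio_put() */
}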
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 45/46] btrfs: convert insert_inline_extent to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (43 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 44/46] btrfs: convert btrfs_set_range_writeback to " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 19:36 ` [PATCH 46/46] btrfs: convert extent_range_clear_dirty_for_io " Josef Bacik
` (2 subsequent siblings)
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
We only use a page to copy in the data for the inline extent. Use a
folio for this instead.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 2f14b337a7ef..c019beb7d9ef 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -495,7 +495,6 @@ static int insert_inline_extent(struct btrfs_trans_handle *trans,
{
struct btrfs_root *root = inode->root;
struct extent_buffer *leaf;
- struct page *page = NULL;
const u32 sectorsize = trans->fs_info->sectorsize;
char *kaddr;
unsigned long ptr;
@@ -555,12 +554,16 @@ static int insert_inline_extent(struct btrfs_trans_handle *trans,
btrfs_set_file_extent_compression(leaf, ei,
compress_type);
} else {
- page = find_get_page(inode->vfs_inode.i_mapping, 0);
+ struct folio *folio;
+
+ folio = __filemap_get_folio(inode->vfs_inode.i_mapping,
+ 0, 0, 0);
+ ASSERT(!IS_ERR(folio));
btrfs_set_file_extent_compression(leaf, ei, 0);
- kaddr = kmap_local_page(page);
+ kaddr = kmap_local_folio(folio, 0);
write_extent_buffer(leaf, kaddr, ptr, size);
kunmap_local(kaddr);
- put_page(page);
+ folio_put(folio);
}
btrfs_mark_buffer_dirty(trans, leaf);
btrfs_release_path(path);
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* [PATCH 46/46] btrfs: convert extent_range_clear_dirty_for_io to use a folio
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (44 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 45/46] btrfs: convert insert_inline_extent " Josef Bacik
@ 2024-07-26 19:36 ` Josef Bacik
2024-07-26 22:57 ` [PATCH 00/46] btrfs: convert most of the data path to use folios Qu Wenruo
2024-07-29 20:32 ` David Sterba
47 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-26 19:36 UTC (permalink / raw)
To: linux-btrfs, kernel-team
Instead of getting a page and using it to clear the dirty bit for IO,
use the folio helper and the appropriate folio functions.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
fs/btrfs/inode.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index c019beb7d9ef..79888ae8d883 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -877,19 +877,19 @@ static inline void inode_should_defrag(struct btrfs_inode *inode,
static int extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end)
{
unsigned long end_index = end >> PAGE_SHIFT;
- struct page *page;
+ struct folio *folio;
int ret = 0;
for (unsigned long index = start >> PAGE_SHIFT;
index <= end_index; index++) {
- page = find_get_page(inode->i_mapping, index);
- if (unlikely(!page)) {
+ folio = __filemap_get_folio(inode->i_mapping, index, 0, 0);
+ if (unlikely(IS_ERR(folio))) {
if (!ret)
- ret = -ENOENT;
+ ret = PTR_ERR(folio);
continue;
}
- clear_page_dirty_for_io(page);
- put_page(page);
+ folio_clear_dirty_for_io(folio);
+ folio_put(folio);
}
return ret;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 51+ messages in thread* Re: [PATCH 00/46] btrfs: convert most of the data path to use folios
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (45 preceding siblings ...)
2024-07-26 19:36 ` [PATCH 46/46] btrfs: convert extent_range_clear_dirty_for_io " Josef Bacik
@ 2024-07-26 22:57 ` Qu Wenruo
2024-07-27 0:55 ` Neal Gompa
2024-07-29 20:32 ` David Sterba
47 siblings, 1 reply; 51+ messages in thread
From: Qu Wenruo @ 2024-07-26 22:57 UTC (permalink / raw)
To: Josef Bacik, linux-btrfs, kernel-team
On 2024/7/27 05:05, Josef Bacik wrote:
> Hello,
>
> Willy indicated that he wants to get rid of page->index in the next merge
> window, so I went to look at what that would entail for btrfs, and I got a
> little carried away.
>
> This patch series does in fact accomplish that, but it takes almost the entirety
> of the data write path and makes it work with only folios. I was going to
> convert everything, but there's some weird gaps that need to be handled in their
> own chunk.
>
> 1. Scrub. We're still passing around page pointers. Not a huge deal, it was
> just another 10ish patches just for that work, so I decided against it.
>
> 2. Buffered writes. Again, I did most of this work and it wasn't bad, but then
> I realized that the free space cache uses some of this code, and I really
> don't want to convert that code, I want to delete it, so I'll do that first.
Totally agree, v1 should be deprecated.
>
> 3. Metadata. Qu has been doing this consistently and I didn't want to get in
> the way of his work so I just left most of that.
I guess there is still metadata code switching between pages and folios.
I'm totally fine if you feel like converting them to use folios.
The only focus for me is to enable larger folios.
So the conversion part is totally fine.
>
> This has run through the CI and didn't cause any issues. I've made everything
> as easy to review as possible and as small as possible. My eyes started to
> glaze over a little bit with the changelogs, so let me know if there's anything
> you want changed. Thanks,
Just give us some time to review the whole series though, the sheer
number of patches is already making my eyes glaze over.
Thanks,
Qu
>
> Josef
>
> Josef Bacik (46):
> btrfs: convert btrfs_readahead to only use folio
> btrfs: convert btrfs_read_folio to only use a folio
> btrfs: convert end_page_read to take a folio
> btrfs: convert begin_page_folio to take a folio instead
> btrfs: convert submit_extent_page to use a folio
> btrfs: convert btrfs_do_readpage to only use a folio
> btrfs: update the writepage tracepoint to take a folio
> btrfs: convert __extent_writepage_io to take a folio
> btrfs: convert extent_write_locked_range to use folios
> btrfs: convert __extent_writepage to be completely folio based
> btrfs: convert add_ra_bio_pages to use only folios
> btrfs: utilize folio more in btrfs_page_mkwrite
> btrfs: convert can_finish_ordered_extent to use a folio
> btrfs: convert btrfs_finish_ordered_extent to take a folio
> btrfs: convert btrfs_mark_ordered_io_finished to take a folio
> btrfs: convert writepage_delalloc to take a folio
> btrfs: convert find_lock_delalloc_range to use a folio
> btrfs: convert lock_delalloc_pages to take a folio
> btrfs: convert __unlock_for_delalloc to take a folio
> btrfs: convert __process_pages_contig to take a folio
> btrfs: convert process_one_page to operate only on folios
> btrfs: convert extent_clear_unlock_delalloc to take a folio
> btrfs: convert extent_write_locked_range to take a folio
> btrfs: convert run_delalloc_cow to take a folio
> btrfs: convert cow_file_range_inline to take a folio
> btrfs: convert cow_file_range to take a folio
> btrfs: convert fallback_to_cow to take a folio
> btrfs: convert run_delalloc_nocow to take a folio
> btrfs: convert btrfs_cleanup_ordered_extents to use folios
> btrfs: convert btrfs_cleanup_ordered_extents to take a folio
> btrfs: convert run_delalloc_compressed to take a folio
> btrfs: convert btrfs_run_delalloc_range to take a folio
> btrfs: convert async_chunk to hold a folio
> btrfs: convert submit_uncompressed_range to take a folio
> btrfs: convert btrfs_writepage_fixup_worker to use a folio
> btrfs: convert btrfs_writepage_cow_fixup to use folio
> btrfs: convert btrfs_writepage_fixup to use a folio
> btrfs: convert uncompress_inline to take a folio
> btrfs: convert read_inline_extent to use a folio
> btrfs: convert btrfs_get_extent to take a folio
> btrfs: convert __get_extent_map to take a folio
> btrfs: convert find_next_dirty_byte to take a folio
> btrfs: convert wait_subpage_spinlock to only use a folio
> btrfs: convert btrfs_set_range_writeback to use a folio
> btrfs: convert insert_inline_extent to use a folio
> btrfs: convert extent_range_clear_dirty_for_io to use a folio
>
> fs/btrfs/btrfs_inode.h | 6 +-
> fs/btrfs/compression.c | 62 +++--
> fs/btrfs/extent_io.c | 436 +++++++++++++++----------------
> fs/btrfs/extent_io.h | 6 +-
> fs/btrfs/file.c | 24 +-
> fs/btrfs/inode.c | 342 ++++++++++++------------
> fs/btrfs/ordered-data.c | 28 +-
> fs/btrfs/ordered-data.h | 6 +-
> fs/btrfs/tests/extent-io-tests.c | 10 +-
> include/trace/events/btrfs.h | 10 +-
> 10 files changed, 467 insertions(+), 463 deletions(-)
>
^ permalink raw reply [flat|nested] 51+ messages in thread* Re: [PATCH 00/46] btrfs: convert most of the data path to use folios
2024-07-26 22:57 ` [PATCH 00/46] btrfs: convert most of the data path to use folios Qu Wenruo
@ 2024-07-27 0:55 ` Neal Gompa
2024-07-29 14:43 ` Josef Bacik
0 siblings, 1 reply; 51+ messages in thread
From: Neal Gompa @ 2024-07-27 0:55 UTC (permalink / raw)
To: Qu Wenruo; +Cc: Josef Bacik, linux-btrfs, kernel-team
On Fri, Jul 26, 2024 at 6:58 PM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> 在 2024/7/27 05:05, Josef Bacik 写道:
> > Hello,
> >
> > Willy indicated that he wants to get rid of page->index in the next merge
> > window, so I went to look at what that would entail for btrfs, and I got a
> > little carried away.
> >
> > This patch series does in fact accomplish that, but it takes almost the entirety
> > of the data write path and makes it work with only folios. I was going to
> > convert everything, but there's some weird gaps that need to be handled in their
> > own chunk.
> >
> > 1. Scrub. We're still passing around page pointers. Not a huge deal, it was
> > just another 10ish patches just for that work, so I decided against it.
> >
> > 2. Buffered writes. Again, I did most of this work and it wasn't bad, but then
> > I realized that the free space cache uses some of this code, and I really
> > don't want to convert that code, I want to delete it, so I'll do that first.
>
> Totally agree, v1 is better to be deprecated.
>
Didn't we already deprecate it? We should just announce the removal schedule.
> >
> > 3. Metadata. Qu has been doing this consistently and I didn't want to get in
> > the way of his work so I just left most of that.
>
> I guess there are still metadata codes switching between page and folios.
>
> I'm totally fine if you feel like to convert them to use folios.
> The only focus for me is to enable larger folios.
> So the conversion part is totally fine.
>
> >
> > This has run through the CI and didn't cause any issues. I've made everything
> > as easy to review as possible and as small as possible. My eyes started to
> > glaze over a little bit with the changelogs, so let me know if there's anything
> > you want changed. Thanks,
>
> Just give us some time to review the whole series though, the pure
> amount of patches is already making my eyes glazing.
>
I'm impressed, but my eyes are glazing over reading it patch by patch
through emails; do you happen to have a branch on GitHub/GitLab/etc.
that I could look at instead?
--
真実はいつも一つ!/ Always, there's only one truth!
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 00/46] btrfs: convert most of the data path to use folios
2024-07-27 0:55 ` Neal Gompa
@ 2024-07-29 14:43 ` Josef Bacik
0 siblings, 0 replies; 51+ messages in thread
From: Josef Bacik @ 2024-07-29 14:43 UTC (permalink / raw)
To: Neal Gompa; +Cc: Qu Wenruo, linux-btrfs, kernel-team
On Fri, Jul 26, 2024 at 08:55:54PM -0400, Neal Gompa wrote:
> On Fri, Jul 26, 2024 at 6:58 PM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> >
> >
> >
> > 在 2024/7/27 05:05, Josef Bacik 写道:
> > > Hello,
> > >
> > > Willy indicated that he wants to get rid of page->index in the next merge
> > > window, so I went to look at what that would entail for btrfs, and I got a
> > > little carried away.
> > >
> > > This patch series does in fact accomplish that, but it takes almost the entirety
> > > of the data write path and makes it work with only folios. I was going to
> > > convert everything, but there's some weird gaps that need to be handled in their
> > > own chunk.
> > >
> > > 1. Scrub. We're still passing around page pointers. Not a huge deal, it was
> > > just another 10ish patches for that work, so I decided against it.
> > >
> > > 2. Buffered writes. Again, I did most of this work and it wasn't bad, but then
> > > I realized that the free space cache uses some of this code, and I really
> > > don't want to convert that code; I want to delete it, so I'll do that first.
> >
> > Totally agree, v1 is better off deprecated.
> >
>
> Didn't we already deprecate it? We should just announce the removal schedule.
>
> > >
> > > 3. Metadata. Qu has been doing this consistently and I didn't want to get in
> > > the way of his work so I just left most of that.
> >
> > I guess there is still metadata code switching between pages and folios.
> >
> > I'm totally fine if you feel like converting them to use folios.
> > The only focus for me is to enable larger folios.
> > So the conversion part is totally fine.
> >
> > >
> > > This has run through the CI and didn't cause any issues. I've made everything
> > > as easy to review as possible and as small as possible. My eyes started to
> > > glaze over a little bit with the changelogs, so let me know if there's anything
> > > you want changed. Thanks,
> >
> > Just give us some time to review the whole series though; the sheer
> > number of patches is already making my eyes glaze over.
> >
>
> I'm impressed, but my eyes are glazing over reading it patch by patch
> through emails. Do you happen to have a branch on GitHub/GitLab/etc.
> that I could look at instead?
Yup, it's here:
https://github.com/josefbacik/linux/tree/btrfs-convert-readahead
Thanks,
Josef
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH 00/46] btrfs: convert most of the data path to use folios
2024-07-26 19:35 [PATCH 00/46] btrfs: convert most of the data path to use folios Josef Bacik
` (46 preceding siblings ...)
2024-07-26 22:57 ` [PATCH 00/46] btrfs: convert most of the data path to use folios Qu Wenruo
@ 2024-07-29 20:32 ` David Sterba
47 siblings, 0 replies; 51+ messages in thread
From: David Sterba @ 2024-07-29 20:32 UTC (permalink / raw)
To: Josef Bacik; +Cc: linux-btrfs, kernel-team
On Fri, Jul 26, 2024 at 03:35:47PM -0400, Josef Bacik wrote:
> Hello,
>
> Willy indicated that he wants to get rid of page->index in the next merge
> window, so I went to look at what that would entail for btrfs, and I got a
> little carried away.
>
> This patch series does in fact accomplish that, but it takes almost the entirety
> of the data write path and makes it work with only folios. I was going to
> convert everything, but there are some weird gaps that need to be handled in their
> own chunk.
>
> 1. Scrub. We're still passing around page pointers. Not a huge deal, it was
> just another 10ish patches for that work, so I decided against it.
>
> 2. Buffered writes. Again, I did most of this work and it wasn't bad, but then
> I realized that the free space cache uses some of this code, and I really
> don't want to convert that code; I want to delete it, so I'll do that first.
>
> 3. Metadata. Qu has been doing this consistently and I didn't want to get in
> the way of his work so I just left most of that.
>
> This has run through the CI and didn't cause any issues. I've made everything
> as easy to review as possible and as small as possible. My eyes started to
> glaze over a little bit with the changelogs, so let me know if there's anything
> you want changed. Thanks,
I did two passes; most of the conversions are straightforward and the API
changes seem OK. There are some local variables referring to pages, like
page_start, that are now initialized from folios. Not a big problem for now;
we'll keep removing references to pages, so this can be done later.
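As a purely illustrative sketch (hypothetical function and variable names, not
code quoted from the series), the leftover naming looks roughly like this: the
values already come from the folio helpers folio_pos() and folio_size(), but
the locals still carry "page" names.

  #include <linux/mm.h>
  #include <linux/pagemap.h>

  /*
   * Sketch only: the values come from folio helpers, but the local
   * names still say "page", which is the naming leftover in question.
   */
  static inline u64 example_folio_byte_range(struct folio *folio, u64 *end)
  {
          u64 page_start = folio_pos(folio);                  /* folio's byte offset in the file */
          u64 page_end = page_start + folio_size(folio) - 1;  /* inclusive end of the folio */

          *end = page_end;
          return page_start;
  }

Renaming such locals to folio_start/folio_end is the kind of follow-up cleanup
mentioned above.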
Reviewed-by: David Sterba <dsterba@suse.com>
^ permalink raw reply [flat|nested] 51+ messages in thread