From: Qu Wenruo <wqu@suse.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: Chris Mason <clm@fb.com>, Josef Bacik <josef@toxicpanda.com>,
David Sterba <dsterba@suse.com>,
linux-btrfs@vger.kernel.org
Subject: Re: [PATCH 2/2] btrfs: Use a folio array throughout the defrag process
Date: Sat, 9 Dec 2023 07:44:34 +1030
Message-ID: <226b0c8d-3d6f-46a1-992c-874ed0c667fb@suse.com>
In-Reply-To: <20231208192725.1523194-2-willy@infradead.org>
On 2023/12/9 05:57, Matthew Wilcox (Oracle) wrote:
> Remove more hidden calls to compound_head() by using an array of folios
> instead of pages. Also neaten the error path in defrag_one_range() by
> adjusting the length of the array instead of checking for NULL.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Looks good to me.
Reviewed-by: Qu Wenruo <wqu@suse.com>
This time, only some unrelated questions below.
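(As an aside, the error-path change mentioned in the commit message is the usual "clamp the count to how far the loop got" pattern, which lets the cleanup loop drop its NULL checks. A userspace sketch of the pattern, not the btrfs code itself:)

```c
#include <stdlib.h>

/*
 * Illustrative only: fill an array of slots; on allocation failure,
 * shrink *nr to the number of slots actually filled so the caller's
 * cleanup loop never sees an unfilled entry.
 */
static int fill_all(void **slots, int *nr)
{
	int i;

	for (i = 0; i < *nr; i++) {
		slots[i] = malloc(16);
		if (!slots[i]) {
			*nr = i;	/* only i slots were filled */
			return -1;
		}
	}
	return 0;
}

static void free_all(void **slots, int nr)
{
	int i;

	/* No NULL checks needed: every slot below nr is valid. */
	for (i = 0; i < nr; i++)
		free(slots[i]);
}
```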
> ---
> fs/btrfs/defrag.c | 44 +++++++++++++++++++++-----------------------
> 1 file changed, 21 insertions(+), 23 deletions(-)
>
> diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
> index 17a13d3ed131..4b94362779fe 100644
> --- a/fs/btrfs/defrag.c
> +++ b/fs/btrfs/defrag.c
> @@ -861,7 +861,7 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
> * NOTE: Caller should also wait for page writeback after the cluster is
> * prepared, here we don't do writeback wait for each page.
> */
> -static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t index)
> +static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t index)
> {
> struct address_space *mapping = inode->vfs_inode.i_mapping;
> gfp_t mask = btrfs_alloc_write_mask(mapping);
> @@ -875,7 +875,7 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t i
> folio = __filemap_get_folio(mapping, index,
> FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);
> if (IS_ERR(folio))
> - return &folio->page;
What's the proper way to grab the first page of a folio?
For now I'm using folio_page(), which is just a wrapper returning
folio->page, but I'm wondering whether there is a recommended way to do it.
> + return folio;
>
> /*
> * Since we can defragment files opened read-only, we can encounter
> @@ -942,7 +942,7 @@ static struct page *defrag_prepare_one_page(struct btrfs_inode *inode, pgoff_t i
> return ERR_PTR(-EIO);
> }
> }
> - return &folio->page;
> + return folio;
> }
>
> struct defrag_target_range {
> @@ -1163,7 +1163,7 @@ static_assert(PAGE_ALIGNED(CLUSTER_SIZE));
> */
> static int defrag_one_locked_target(struct btrfs_inode *inode,
> struct defrag_target_range *target,
> - struct page **pages, int nr_pages,
> + struct folio **folios, int nr_pages,
> struct extent_state **cached_state)
> {
> struct btrfs_fs_info *fs_info = inode->root->fs_info;
> @@ -1172,7 +1172,7 @@ static int defrag_one_locked_target(struct btrfs_inode *inode,
> const u64 len = target->len;
> unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
> unsigned long start_index = start >> PAGE_SHIFT;
> - unsigned long first_index = page_index(pages[0]);
> + unsigned long first_index = folios[0]->index;
The same question for the index: there is a folio_index() wrapper.
Should we use the wrapper, or will the wrapper go away in the future?
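(My understanding of why the wrapper exists, sketched in userspace; simplified stand-ins, not the real kernel definitions. If that understanding is right, plain folio->index should be safe here since defrag folios are always file-backed:)

```c
#include <stdbool.h>

typedef unsigned long pgoff_t;

/*
 * For ordinary file-backed folios, folio_index() is just folio->index.
 * The wrapper only matters for folios in the swap cache, where the
 * index has to be derived from the swap entry instead.
 */
struct folio {
	pgoff_t index;      /* offset within the mapping, in pages */
	bool swapcache;     /* stand-in for folio_test_swapcache() */
	pgoff_t swap_index; /* stand-in for the swap-cache index */
};

static pgoff_t folio_index(const struct folio *folio)
{
	if (folio->swapcache)
		return folio->swap_index;
	return folio->index;
}
```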
> int ret = 0;
> int i;
>
> @@ -1189,8 +1189,8 @@ static int defrag_one_locked_target(struct btrfs_inode *inode,
>
> /* Update the page status */
> for (i = start_index - first_index; i <= last_index - first_index; i++) {
> - ClearPageChecked(pages[i]);
> - btrfs_page_clamp_set_dirty(fs_info, pages[i], start, len);
> + folio_clear_checked(folios[i]);
> + btrfs_page_clamp_set_dirty(fs_info, &folios[i]->page, start, len);
After my patch "2/3 btrfs: migrate subpage code to folio interfaces",
the btrfs_page_*() helpers accept a folio parameter directly, so this
may lead to a conflict.
Thanks,
Qu
> }
> btrfs_delalloc_release_extents(inode, len);
> extent_changeset_free(data_reserved);
> @@ -1206,7 +1206,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
> struct defrag_target_range *entry;
> struct defrag_target_range *tmp;
> LIST_HEAD(target_list);
> - struct page **pages;
> + struct folio **folios;
> const u32 sectorsize = inode->root->fs_info->sectorsize;
> u64 last_index = (start + len - 1) >> PAGE_SHIFT;
> u64 start_index = start >> PAGE_SHIFT;
> @@ -1217,21 +1217,21 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
> ASSERT(nr_pages <= CLUSTER_SIZE / PAGE_SIZE);
> ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(len, sectorsize));
>
> - pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
> - if (!pages)
> + folios = kcalloc(nr_pages, sizeof(struct folio *), GFP_NOFS);
> + if (!folios)
> return -ENOMEM;
>
> /* Prepare all pages */
> for (i = 0; i < nr_pages; i++) {
> - pages[i] = defrag_prepare_one_page(inode, start_index + i);
> - if (IS_ERR(pages[i])) {
> - ret = PTR_ERR(pages[i]);
> - pages[i] = NULL;
> - goto free_pages;
> + folios[i] = defrag_prepare_one_folio(inode, start_index + i);
> + if (IS_ERR(folios[i])) {
> + ret = PTR_ERR(folios[i]);
> + nr_pages = i;
> + goto free_folios;
> }
> }
> for (i = 0; i < nr_pages; i++)
> - wait_on_page_writeback(pages[i]);
> + folio_wait_writeback(folios[i]);
>
> /* Lock the pages range */
> lock_extent(&inode->io_tree, start_index << PAGE_SHIFT,
> @@ -1251,7 +1251,7 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
> goto unlock_extent;
>
> list_for_each_entry(entry, &target_list, list) {
> - ret = defrag_one_locked_target(inode, entry, pages, nr_pages,
> + ret = defrag_one_locked_target(inode, entry, folios, nr_pages,
> &cached_state);
> if (ret < 0)
> break;
> @@ -1265,14 +1265,12 @@ static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
> unlock_extent(&inode->io_tree, start_index << PAGE_SHIFT,
> (last_index << PAGE_SHIFT) + PAGE_SIZE - 1,
> &cached_state);
> -free_pages:
> +free_folios:
> for (i = 0; i < nr_pages; i++) {
> - if (pages[i]) {
> - unlock_page(pages[i]);
> - put_page(pages[i]);
> - }
> + folio_unlock(folios[i]);
> + folio_put(folios[i]);
> }
> - kfree(pages);
> + kfree(folios);
> return ret;
> }
>