From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 5/7] btrfs: defrag: introduce a new helper to defrag one cluster
Date: Fri, 28 May 2021 10:28:19 +0800
Message-ID: <20210528022821.81386-6-wqu@suse.com>
In-Reply-To: <20210528022821.81386-1-wqu@suse.com>
This new helper, defrag_one_cluster(), will defrag one cluster (at most
256K) by:

- Collecting all defrag targets inside the cluster

- Calling defrag_one_target() on each target, with some extra range
  clamping so we never exceed the requested defrag limit

- Updating the @sectors_defraged parameter
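
For illustration only (not part of this patch): a rough sketch of how a
caller could drive defrag_one_cluster() over a file range. The real
wiring into btrfs_defrag_file() is done in patch 6/7; the function name
defrag_file_range_sketch() and the loop structure below are assumptions
made for the sketch, not the final implementation.

static int defrag_file_range_sketch(struct btrfs_inode *inode,
				    struct file_ra_state *ra,
				    u64 start, u64 end, u32 extent_thresh,
				    u64 newer_than, bool do_compress,
				    unsigned long max_sectors)
{
	unsigned long sectors_defraged = 0;
	u64 cur = start;
	int ret = 0;

	while (cur < end) {
		/* Each call covers at most one cluster (256K) */
		u32 len = min_t(u64, CLUSTER_SIZE, end - cur);

		ret = defrag_one_cluster(inode, ra, cur, len, extent_thresh,
					 newer_than, do_compress,
					 &sectors_defraged, max_sectors);
		if (ret < 0)
			break;
		/* Stop once the requested number of sectors is defragged */
		if (max_sectors && sectors_defraged >= max_sectors)
			break;
		cur += len;
	}
	return ret;
}

As a quick worked example of the range clamping inside the helper
(made-up numbers): with a 4K sectorsize, max_sectors = 128 and
*sectors_defraged = 120, the remaining budget is 8 sectors (32K), so a
64K target gets clamped to 32K and the counter reaches the limit,
ending the loop.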
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/ioctl.c | 42 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index cd7650bcc70c..911db470aad6 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1562,6 +1562,48 @@ static int defrag_one_target(struct btrfs_inode *inode,
 	return ret;
 }
 
+static int defrag_one_cluster(struct btrfs_inode *inode,
+			      struct file_ra_state *ra,
+			      u64 start, u32 len, u32 extent_thresh,
+			      u64 newer_than, bool do_compress,
+			      unsigned long *sectors_defraged,
+			      unsigned long max_sectors)
+{
+	const u32 sectorsize = inode->root->fs_info->sectorsize;
+	struct defrag_target_range *entry;
+	struct defrag_target_range *tmp;
+	LIST_HEAD(target_list);
+	int ret;
+
+	BUILD_BUG_ON(!IS_ALIGNED(CLUSTER_SIZE, PAGE_SIZE));
+	ret = defrag_collect_targets(inode, start, len, extent_thresh,
+				     newer_than, do_compress, &target_list);
+	if (ret < 0)
+		goto out;
+
+	list_for_each_entry(entry, &target_list, list) {
+		u32 range_len = entry->len;
+
+		/* Reached the limit */
+		if (max_sectors && max_sectors == *sectors_defraged)
+			break;
+
+		if (max_sectors)
+			range_len = min_t(u32, range_len,
+				(max_sectors - *sectors_defraged) * sectorsize);
+		ret = defrag_one_target(inode, ra, entry->start, range_len);
+		if (ret < 0)
+			break;
+		*sectors_defraged += range_len / sectorsize;
+	}
+out:
+	list_for_each_entry_safe(entry, tmp, &target_list, list) {
+		list_del_init(&entry->list);
+		kfree(entry);
+	}
+	return ret;
+}
+
 /*
  * Btrfs entrace for defrag.
  *
--
2.31.1