From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Subject: [PATCH v3 15/26] btrfs: refactor submit_compressed_extents()
Date: Mon, 27 Sep 2021 15:21:57 +0800
Message-ID: <20210927072208.21634-16-wqu@suse.com>
In-Reply-To: <20210927072208.21634-1-wqu@suse.com>
We have a big chunk of code inside a while() loop, with lots of strange
jumps for error handling. It's definitely not up to today's coding
standards.
Move the code into a new function, submit_one_async_extent().
While we're here, also make the following modifications:

- Comment style change
  To follow the current scheme.

- Don't fall back to non-compressed write when hitting ENOSPC
  If we hit ENOSPC for a compressed write, how could we reserve more
  space for a non-compressed write, which needs even more space?
  Thus we go the error path directly.
  This removes the retry: label.

- Add comments for the super long parameter lists
  Explain what each parameter is for, so we don't need to check the
  prototype.

- Move the error handling into submit_one_async_extent()
  Thus there is no more strange code like:

  out_free:
          ...
          goto again;
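The caller/helper split described above can be sketched in plain C. All
names below are illustrative stand-ins, not the real btrfs types or
helpers: the point is only the shape of the refactor, where the loop body
becomes a helper that returns an error code and handles its own cleanup,
so the caller's loop stays flat and keeps walking the list even when one
entry fails (just as submit_compressed_extents() only logs the error).

```c
#include <stdio.h>

/* Illustrative stand-in for struct async_extent -- not the btrfs type. */
struct fake_extent {
	struct fake_extent *next;
	long start;
	long ram_size;
};

/*
 * The extracted loop body: one extent in, an error code out.  All error
 * handling stays local, so the caller needs no retry:/out_free: labels.
 */
static int submit_one(struct fake_extent *ext, long *alloc_hint)
{
	if (ext->ram_size <= 0)
		return -22;	/* local failure, -EINVAL style */
	*alloc_hint = ext->start + ext->ram_size;
	return 0;
}

/* The caller logs failures but keeps walking the list, like the patch. */
static int submit_all(struct fake_extent *head, long *alloc_hint)
{
	int failures = 0;

	for (struct fake_extent *e = head; e; e = e->next) {
		int ret = submit_one(e, alloc_hint);

		if (ret) {
			fprintf(stderr, "extent at %ld failed: %d\n",
				e->start, ret);
			failures++;
		}
	}
	return failures;
}
```

The alloc_hint pointer mirrors how the patch threads the allocation hint
from the caller through each submit_one_async_extent() call.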
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/inode.c | 278 +++++++++++++++++++++++------------------------
1 file changed, 137 insertions(+), 141 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 796a850b9076..11e02b2300f5 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -839,163 +839,129 @@ static void free_async_extent_pages(struct async_extent *async_extent)
async_extent->pages = NULL;
}
-/*
- * phase two of compressed writeback. This is the ordered portion
- * of the code, which only gets called in the order the work was
- * queued. We walk all the async extents created by compress_file_range
- * and send them down to the disk.
- */
-static noinline void submit_compressed_extents(struct async_chunk *async_chunk)
+static int submit_one_async_extent(struct btrfs_inode *inode,
+ struct async_chunk *async_chunk,
+ struct async_extent *async_extent,
+ u64 *alloc_hint)
{
- struct btrfs_inode *inode = BTRFS_I(async_chunk->inode);
- struct btrfs_fs_info *fs_info = inode->root->fs_info;
- struct async_extent *async_extent;
- u64 alloc_hint = 0;
+ struct extent_io_tree *io_tree = &inode->io_tree;
+ struct btrfs_root *root = inode->root;
+ struct btrfs_fs_info *fs_info = root->fs_info;
struct btrfs_key ins;
struct extent_map *em;
- struct btrfs_root *root = inode->root;
- struct extent_io_tree *io_tree = &inode->io_tree;
int ret = 0;
+ u64 start = async_extent->start;
+ u64 end = async_extent->start + async_extent->ram_size - 1;
-again:
- while (!list_empty(&async_chunk->extents)) {
- async_extent = list_entry(async_chunk->extents.next,
- struct async_extent, list);
- list_del(&async_extent->list);
-
-retry:
- lock_extent(io_tree, async_extent->start,
- async_extent->start + async_extent->ram_size - 1);
- /* did the compression code fall back to uncompressed IO? */
- if (!async_extent->pages) {
- int page_started = 0;
- unsigned long nr_written = 0;
-
- /* allocate blocks */
- ret = cow_file_range(inode, async_chunk->locked_page,
- async_extent->start,
- async_extent->start +
- async_extent->ram_size - 1,
- &page_started, &nr_written, 0);
-
- /* JDM XXX */
+ lock_extent(io_tree, start, end);
- /*
- * if page_started, cow_file_range inserted an
- * inline extent and took care of all the unlocking
- * and IO for us. Otherwise, we need to submit
- * all those pages down to the drive.
- */
- if (!page_started && !ret)
- extent_write_locked_range(&inode->vfs_inode,
- async_extent->start,
- async_extent->start +
- async_extent->ram_size - 1,
- WB_SYNC_ALL);
- else if (ret && async_chunk->locked_page)
- unlock_page(async_chunk->locked_page);
- kfree(async_extent);
- cond_resched();
- continue;
- }
-
- ret = btrfs_reserve_extent(root, async_extent->ram_size,
- async_extent->compressed_size,
- async_extent->compressed_size,
- 0, alloc_hint, &ins, 1, 1);
- if (ret) {
- free_async_extent_pages(async_extent);
-
- if (ret == -ENOSPC) {
- unlock_extent(io_tree, async_extent->start,
- async_extent->start +
- async_extent->ram_size - 1);
-
- /*
- * we need to redirty the pages if we decide to
- * fallback to uncompressed IO, otherwise we
- * will not submit these pages down to lower
- * layers.
- */
- extent_range_redirty_for_io(&inode->vfs_inode,
- async_extent->start,
- async_extent->start +
- async_extent->ram_size - 1);
+ /* We have fall back to uncompressed write */
+ if (!async_extent->pages) {
+ int page_started = 0;
+ unsigned long nr_written = 0;
- goto retry;
- }
- goto out_free;
- }
/*
- * here we're doing allocation and writeback of the
- * compressed pages
+ * Call cow_file_range() to run the delalloc range directly,
+ * since we won't go to nocow or async path again.
*/
- em = create_io_em(inode, async_extent->start,
- async_extent->ram_size, /* len */
- async_extent->start, /* orig_start */
- ins.objectid, /* block_start */
- ins.offset, /* block_len */
- ins.offset, /* orig_block_len */
- async_extent->ram_size, /* ram_bytes */
- async_extent->compress_type,
- BTRFS_ORDERED_COMPRESSED);
- if (IS_ERR(em))
- /* ret value is not necessary due to void function */
- goto out_free_reserve;
- free_extent_map(em);
-
- ret = btrfs_add_ordered_extent_compress(inode,
- async_extent->start,
- ins.objectid,
- async_extent->ram_size,
- ins.offset,
- async_extent->compress_type);
- if (ret) {
- btrfs_drop_extent_cache(inode, async_extent->start,
- async_extent->start +
- async_extent->ram_size - 1, 0);
- goto out_free_reserve;
- }
- btrfs_dec_block_group_reservations(fs_info, ins.objectid);
-
+ ret = cow_file_range(inode, async_chunk->locked_page,
+ start, end, &page_started, &nr_written, 0);
/*
- * clear dirty, set writeback and unlock the pages.
+ * If @page_started, cow_file_range() inserted an
+ * inline extent and took care of all the unlocking
+ * and IO for us. Otherwise, we need to submit
+ * all those pages down to the drive.
*/
- extent_clear_unlock_delalloc(inode, async_extent->start,
- async_extent->start +
- async_extent->ram_size - 1,
- NULL, EXTENT_LOCKED | EXTENT_DELALLOC,
- PAGE_UNLOCK | PAGE_START_WRITEBACK);
- if (btrfs_submit_compressed_write(inode, async_extent->start,
- async_extent->ram_size,
- ins.objectid,
- ins.offset, async_extent->pages,
- async_extent->nr_pages,
- async_chunk->write_flags,
- async_chunk->blkcg_css)) {
- const u64 start = async_extent->start;
- const u64 end = start + async_extent->ram_size - 1;
-
- btrfs_writepage_endio_finish_ordered(inode, NULL, start,
- end, false);
-
- extent_clear_unlock_delalloc(inode, start, end, NULL, 0,
- PAGE_END_WRITEBACK |
- PAGE_SET_ERROR);
- free_async_extent_pages(async_extent);
- }
- alloc_hint = ins.objectid + ins.offset;
+ if (!page_started && !ret)
+ extent_write_locked_range(&inode->vfs_inode, start,
+ end, WB_SYNC_ALL);
+ else if (ret && async_chunk->locked_page)
+ unlock_page(async_chunk->locked_page);
kfree(async_extent);
- cond_resched();
+ return ret;
+ }
+
+ ret = btrfs_reserve_extent(root, async_extent->ram_size,
+ async_extent->compressed_size,
+ async_extent->compressed_size,
+ 0, *alloc_hint, &ins, 1, 1);
+ if (ret) {
+ free_async_extent_pages(async_extent);
+ /*
+ * Here we used to try again by going back to non-compressed
+ * path for ENOSPC.
+ * But we can't reserve space even for compressed size, how
+ * could it work for uncompressed size which requires larger
+ * size?
+ * So here we directly go error path.
+ */
+ goto out_free;
+ }
+
+ /*
+ * Here we're doing allocation and writeback of the
+ * compressed pages
+ */
+ em = create_io_em(inode, start,
+ async_extent->ram_size, /* len */
+ start, /* orig_start */
+ ins.objectid, /* block_start */
+ ins.offset, /* block_len */
+ ins.offset, /* orig_block_len */
+ async_extent->ram_size, /* ram_bytes */
+ async_extent->compress_type,
+ BTRFS_ORDERED_COMPRESSED);
+ if (IS_ERR(em)) {
+ ret = PTR_ERR(em);
+ goto out_free_reserve;
+ }
+ free_extent_map(em);
+
+ ret = btrfs_add_ordered_extent_compress(inode, start, /* file_offset */
+ ins.objectid, /* disk_bytenr */
+ async_extent->ram_size, /* num_bytes */
+ ins.offset, /* disk_num_bytes */
+ async_extent->compress_type);
+ if (ret) {
+ btrfs_drop_extent_cache(inode, start, end, 0);
+ goto out_free_reserve;
}
- return;
+ btrfs_dec_block_group_reservations(fs_info, ins.objectid);
+
+ /*
+ * clear dirty, set writeback and unlock the pages.
+ */
+ extent_clear_unlock_delalloc(inode, start, end,
+ NULL, EXTENT_LOCKED | EXTENT_DELALLOC,
+ PAGE_UNLOCK | PAGE_START_WRITEBACK);
+ if (btrfs_submit_compressed_write(inode, start, /* file_offset */
+ async_extent->ram_size, /* num_bytes */
+ ins.objectid, /* disk_bytenr */
+ ins.offset, /* compressed_len */
+ async_extent->pages, /* compressed_pages */
+ async_extent->nr_pages,
+ async_chunk->write_flags,
+ async_chunk->blkcg_css)) {
+ const u64 start = async_extent->start;
+ const u64 end = start + async_extent->ram_size - 1;
+
+ btrfs_writepage_endio_finish_ordered(inode, NULL, start,
+ end, 0);
+
+ extent_clear_unlock_delalloc(inode, start, end, NULL, 0,
+ PAGE_END_WRITEBACK |
+ PAGE_SET_ERROR);
+ free_async_extent_pages(async_extent);
+ }
+ *alloc_hint = ins.objectid + ins.offset;
+ kfree(async_extent);
+ return ret;
+
out_free_reserve:
btrfs_dec_block_group_reservations(fs_info, ins.objectid);
btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1);
out_free:
- extent_clear_unlock_delalloc(inode, async_extent->start,
- async_extent->start +
- async_extent->ram_size - 1,
+ extent_clear_unlock_delalloc(inode, start, end,
NULL, EXTENT_LOCKED | EXTENT_DELALLOC |
EXTENT_DELALLOC_NEW |
EXTENT_DEFRAG | EXTENT_DO_ACCOUNTING,
@@ -1003,7 +969,37 @@ static noinline void submit_compressed_extents(struct async_chunk *async_chunk)
PAGE_END_WRITEBACK | PAGE_SET_ERROR);
free_async_extent_pages(async_extent);
kfree(async_extent);
- goto again;
+ return ret;
+}
+
+/*
+ * phase two of compressed writeback. This is the ordered portion
+ * of the code, which only gets called in the order the work was
+ * queued. We walk all the async extents created by compress_file_range
+ * and send them down to the disk.
+ */
+static noinline void submit_compressed_extents(struct async_chunk *async_chunk)
+{
+ struct btrfs_inode *inode = BTRFS_I(async_chunk->inode);
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct async_extent *async_extent;
+ u64 alloc_hint = 0;
+ int ret = 0;
+
+ while (!list_empty(&async_chunk->extents)) {
+ async_extent = list_entry(async_chunk->extents.next,
+ struct async_extent, list);
+ list_del(&async_extent->list);
+
+ ret = submit_one_async_extent(inode, async_chunk, async_extent,
+ &alloc_hint);
+ /* Just for developer */
+ btrfs_debug(fs_info,
+"async extent submission failed root=%lld inode=%llu start=%llu len=%llu ret=%d",
+ inode->root->root_key.objectid,
+ btrfs_ino(inode), async_extent->start,
+ async_extent->ram_size, ret);
+ }
}
static u64 get_extent_allocation_hint(struct btrfs_inode *inode, u64 start,
--
2.33.0
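For readers skimming the new helper: its out_free_reserve/out_free labels
follow the kernel's centralized-exit convention, where each failure jumps
to a label that undoes only the resources already acquired. A minimal
sketch of that pattern, with hypothetical names standing in for the real
btrfs reservation and extent-map calls:

```c
/*
 * Centralized-exit error handling, as used in submit_one_async_extent():
 * reserve() stands in for btrfs_reserve_extent(), map() for
 * create_io_em(), unreserve() for btrfs_free_reserved_extent().
 * All names are illustrative, not the real btrfs helpers.
 */
static int reserved;

static int reserve(int fail)
{
	if (fail)
		return -28;	/* -ENOSPC */
	reserved = 1;
	return 0;
}

static int map(int fail)
{
	return fail ? -12 : 0;	/* -ENOMEM */
}

static void unreserve(void)
{
	reserved = 0;
}

static int submit(int fail_reserve, int fail_map)
{
	int ret;

	ret = reserve(fail_reserve);
	if (ret)
		goto out;		/* nothing acquired yet */
	ret = map(fail_map);
	if (ret)
		goto out_unreserve;	/* undo the reservation only */
	return 0;

out_unreserve:
	unreserve();
out:
	return ret;
}
```

Each label undoes exactly one more step than the label below it, so a
failure can never release something it never took, which is why the patch
can drop the old retry:/again: jumps without leaking the reservation.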