From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Christoph Hellwig <hch@lst.de>,
	Josef Bacik <josef@toxicpanda.com>,
	David Sterba <dsterba@suse.com>, Qu Wenruo <wqu@suse.com>
Cc: Naohiro Aota <naohiro.aota@wdc.com>,
	linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 27/40] btrfs: clean up the raid map handling __btrfs_map_block
Date: Wed, 23 Mar 2022 09:08:44 +0800	[thread overview]
Message-ID: <0bcf1be8-cbbe-1978-9d7b-eed52ebacc57@gmx.com> (raw)
In-Reply-To: <20220322155606.1267165-28-hch@lst.de>



On 2022/3/22 23:55, Christoph Hellwig wrote:
> Clear need_raid_map early instead of repeating the same conditional over
> and over.

I had a more comprehensive cleanup, but only for scrub:
https://lore.kernel.org/linux-btrfs/cover.1646984153.git.wqu@suse.com/

All profiles are split into 3 categories:

- Simple mirror
   Single, DUP, RAID1*.

- Simple stripe
   RAID0 and RAID10
   Inside each data stripe, it then reduces to a simple mirror.

- RAID56
   For the mirror_num == 0/1 cases, each data stripe is still a simple
   mirror.
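
The three categories above could be sketched as a small classifier. The
flag names and bit values below are illustrative stand-ins for the real
BTRFS_BLOCK_GROUP_* flags, not the on-disk values:

```c
/* Illustrative profile bits; the real on-disk flag values differ. */
enum {
	BG_RAID0   = 1 << 0,
	BG_RAID1   = 1 << 1,
	BG_DUP     = 1 << 2,
	BG_RAID10  = 1 << 3,
	BG_RAID5   = 1 << 4,
	BG_RAID6   = 1 << 5,
	BG_RAID1C3 = 1 << 6,
	BG_RAID1C4 = 1 << 7,
};

enum profile_category { SIMPLE_MIRROR, SIMPLE_STRIPE, RAID56 };

/*
 * Classify a chunk profile into the three categories: RAID5/6 needs
 * parity handling, RAID0/RAID10 adds a stripe layer on top, and
 * everything else (SINGLE with no profile bit, DUP, RAID1*) is a
 * plain mirror.
 */
enum profile_category classify(unsigned long long type)
{
	if (type & (BG_RAID5 | BG_RAID6))
		return RAID56;
	if (type & (BG_RAID0 | BG_RAID10))
		return SIMPLE_STRIPE;
	return SIMPLE_MIRROR;
}
```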

Maybe we can follow the same idea here?

Thanks,
Qu


>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   fs/btrfs/volumes.c | 60 ++++++++++++++++++++++------------------------
>   1 file changed, 29 insertions(+), 31 deletions(-)
>
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index 1cf0914b33847..cc9e2565e4b64 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -6435,6 +6435,10 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
>
>   	map = em->map_lookup;
>
> +	if (!(map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) ||
> +	    (!need_full_stripe(op) && mirror_num <= 1))
> +		need_raid_map = 0;
> +
>   	*length = geom.len;
>   	stripe_len = geom.stripe_len;
>   	stripe_nr = geom.stripe_nr;
> @@ -6509,37 +6513,32 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
>   					      dev_replace_is_ongoing);
>   			mirror_num = stripe_index - old_stripe_index + 1;
>   		}
> +	} else if (need_raid_map) {
> +		/* push stripe_nr back to the start of the full stripe */
> +		stripe_nr = div64_u64(raid56_full_stripe_start,
> +				      stripe_len * data_stripes);
>
> -	} else if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) {
> -		if (need_raid_map && (need_full_stripe(op) || mirror_num > 1)) {
> -			/* push stripe_nr back to the start of the full stripe */
> -			stripe_nr = div64_u64(raid56_full_stripe_start,
> -					stripe_len * data_stripes);
> -
> -			/* RAID[56] write or recovery. Return all stripes */
> -			num_stripes = map->num_stripes;
> -			max_errors = nr_parity_stripes(map);
> -
> -			*length = map->stripe_len;
> -			stripe_index = 0;
> -			stripe_offset = 0;
> -		} else {
> -			/*
> -			 * Mirror #0 or #1 means the original data block.
> -			 * Mirror #2 is RAID5 parity block.
> -			 * Mirror #3 is RAID6 Q block.
> -			 */
> -			stripe_nr = div_u64_rem(stripe_nr,
> -					data_stripes, &stripe_index);
> -			if (mirror_num > 1)
> -				stripe_index = data_stripes + mirror_num - 2;
> +		/* RAID[56] write or recovery. Return all stripes */
> +		num_stripes = map->num_stripes;
> +		max_errors = nr_parity_stripes(map);
>
> -			/* We distribute the parity blocks across stripes */
> -			div_u64_rem(stripe_nr + stripe_index, map->num_stripes,
> -					&stripe_index);
> -			if (!need_full_stripe(op) && mirror_num <= 1)
> -				mirror_num = 1;
> -		}
> +		*length = map->stripe_len;
> +		stripe_index = 0;
> +		stripe_offset = 0;
> +	} else if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) {
> +		/*
> +		 * Mirror #0 or #1 means the original data block.
> +		 * Mirror #2 is RAID5 parity block.
> +		 * Mirror #3 is RAID6 Q block.
> +		 */
> +		stripe_nr = div_u64_rem(stripe_nr, data_stripes, &stripe_index);
> +		if (mirror_num > 1)
> +			stripe_index = data_stripes + mirror_num - 2;
> +		/* We distribute the parity blocks across stripes */
> +		div_u64_rem(stripe_nr + stripe_index, map->num_stripes,
> +			    &stripe_index);
> +		if (!need_full_stripe(op) && mirror_num <= 1)
> +			mirror_num = 1;
>   	} else {
>   		/*
>   		 * after this, stripe_nr is the number of stripes on this
> @@ -6581,8 +6580,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
>   	}
>
>   	/* Build raid_map */
> -	if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK && need_raid_map &&
> -	    (need_full_stripe(op) || mirror_num > 1)) {
> +	if (need_raid_map) {
>   		u64 tmp;
>   		unsigned rot;
>
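
As a toy model of the RAID56 read-path branch quoted above (mirror 1
reads the data stripe, mirror 2 the P stripe, mirror 3 the Q stripe on
RAID6, with parity rotated by the full-stripe number), here is a
user-space sketch. The function name and parameters are hypothetical;
only the arithmetic mirrors the hunk:

```c
/*
 * Map a logical stripe number plus mirror_num to a device stripe index,
 * following the quoted hunk: split stripe_nr into (full-stripe number,
 * index within the data stripes), redirect mirrors > 1 to the parity
 * stripes, then rotate everything by the full-stripe number so parity
 * is distributed across devices.
 */
unsigned map_raid56_stripe(unsigned long long stripe_nr,
			   unsigned data_stripes,
			   unsigned num_stripes,
			   int mirror_num)
{
	/* index of this stripe within the data stripes */
	unsigned stripe_index = stripe_nr % data_stripes;

	/* number of the full stripe this sits in */
	stripe_nr /= data_stripes;

	/* mirror 2 is P (index data_stripes), mirror 3 is Q (RAID6) */
	if (mirror_num > 1)
		stripe_index = data_stripes + mirror_num - 2;

	/* distribute parity: rotate by the full-stripe number */
	return (stripe_nr + stripe_index) % num_stripes;
}
```

With 3 devices and RAID5 (data_stripes = 2), full stripe 0 keeps P on
device 2 while full stripe 1 rotates it to device 0, matching the
"distribute the parity blocks across stripes" comment.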

