From: Boris Burkov <boris@bur.io>
To: Josef Bacik <josef@toxicpanda.com>
Cc: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 3/3] btrfs: cycle through the RAID profiles as a last resort
Date: Thu, 20 Jul 2023 15:28:17 -0700
Message-ID: <20230720222817.GB545904@zen>
In-Reply-To: <4beedde9b4f6adf4a7054707617f8784e5ee8b35.1689883754.git.josef@toxicpanda.com>

On Thu, Jul 20, 2023 at 04:12:16PM -0400, Josef Bacik wrote:
> Instead of looping through the RAID indices before advancing the FFE
> loop, let's move this to after we've exhausted the entire FFE loop, in
> order to give us the highest chance of satisfying the allocation based
> on its flags.

Doesn't this get screwed up by find_free_extent_update_loop resetting
index to 0 whenever it advances the loop phase?

I.e., let's say we fail on the first pass with the correct raid flag.
Then we go into find_free_extent_update_loop and intelligently skip the
pointless raid loops. But then we set index to 0 and start an even
worse meta loop that does every step (including allocating chunks) with
every raid index, most of which are doomed to fail by definition.
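
To make that concrete, here's a toy userspace model of just the
loop/index bookkeeping in find_free_extent_update_loop after this
patch. Everything else (caching, clusters, full_search, the actual
chunk allocation) is ignored, so treat it as my reading of the patch,
not the real code:

#include <stdio.h>

#define NR_RAID_TYPES 9	/* stands in for BTRFS_NR_RAID_TYPES */

/* Simplified stand-in for btrfs_loop_type; the real enum has more
 * phases than this. */
enum {
	LOOP_CACHING_NOWAIT,
	LOOP_CACHING_WAIT,
	LOOP_ALLOC_CHUNK,
	LOOP_NO_EMPTY_SIZE,
};

/* Returns 1 to search again, -28 (-ENOSPC) to give up. */
static int update_loop(int *loop, int *index)
{
	if (*loop < LOOP_NO_EMPTY_SIZE) {
		*index = 0;	/* the reset in question */
		(*loop)++;	/* LOOP_ALLOC_CHUNK would allocate here */
		return 1;
	}

	/* The hunk this patch moved below the phase handling: */
	(*index)++;
	if (*index < NR_RAID_TYPES)
		return 1;

	return -28;
}

int main(void)
{
	int loop = LOOP_CACHING_NOWAIT;
	int index = 3;	/* pretend the allocation's flags map to index 3 */

	/* Treat every search pass as a failure and print the state each
	 * pass would run with. */
	do {
		printf("pass: loop=%d index=%d\n", loop, index);
	} while (update_loop(&loop, &index) == 1);

	return 0;
}

Unless I'm misreading, the profile the caller asked for is dropped
after the very first pass; the intermediate phases all run with index
0, and the raid cycling only kicks in once we're already at
LOOP_NO_EMPTY_SIZE.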

Not setting it to 0, OTOH, breaks the logic for setting "full_search",
but I do think that could be fixed one way or another.
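
In the toy above, "not setting it to 0" would just mean dropping the
reset, something like this (untested, purely illustrative, and it
still leaves the full_search question open):

/* Keep the requested profile across phase advances; only start
 * cycling the raid indices once every phase has failed. */
static int update_loop_alt(int *loop, int *index)
{
	if (*loop < LOOP_NO_EMPTY_SIZE) {
		(*loop)++;	/* *index left alone */
		return 1;
	}

	(*index)++;
	if (*index < NR_RAID_TYPES)
		return 1;

	return -28;
}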

> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
>  fs/btrfs/extent-tree.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index 8db2673948c9..ca4277ec1b19 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -3953,10 +3953,6 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
>  		return 0;
>  	}
>  
> -	ffe_ctl->index++;
> -	if (ffe_ctl->index < BTRFS_NR_RAID_TYPES)
> -		return 1;
> -
>  	/*
>  	 * See the comment for btrfs_loop_type for an explanation of the phases.
>  	 */
> @@ -4026,6 +4022,11 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
>  		}
>  		return 1;
>  	}
> +
> +	ffe_ctl->index++;
> +	if (ffe_ctl->index < BTRFS_NR_RAID_TYPES)
> +		return 1;
> +
>  	return -ENOSPC;
>  }
>  
> -- 
> 2.41.0
> 

Thread overview: 11 messages
2023-07-20 20:12 [PATCH 0/3] btrfs: fix generic/475 hang Josef Bacik
2023-07-20 20:12 ` [PATCH 1/3] btrfs: wait for block groups to finish caching during allocation Josef Bacik
2023-07-20 22:21   ` Boris Burkov
2023-07-20 20:12 ` [PATCH 2/3] btrfs: move comments to btrfs_loop_type definition Josef Bacik
2023-07-20 22:28   ` Boris Burkov
2023-07-27 16:32   ` David Sterba
2023-07-20 20:12 ` [PATCH 3/3] btrfs: cycle through the RAID profiles as a last resort Josef Bacik
2023-07-20 22:28   ` Boris Burkov [this message]
2023-07-21  1:07     ` Josef Bacik
2023-07-25 14:37   ` kernel test robot
2023-07-21  7:36 ` [PATCH 0/3] btrfs: fix generic/475 hang Christoph Hellwig
