From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Christoph Hellwig <hch@lst.de>, David Sterba <dsterba@suse.com>,
	Josef Bacik <josef@toxicpanda.com>, Qu Wenruo <wqu@suse.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH 05/10] btrfs: defer I/O completion based on the btrfs_raid_bio
Date: Sun, 1 May 2022 12:53:00 +0800	[thread overview]
Message-ID: <6dba9162-c64d-2d27-12eb-d48ac6a4ac8a@gmx.com> (raw)
In-Reply-To: <4e93a857-43f2-9e67-9ef8-4db00edd2f6c@gmx.com>



On 2022/5/1 12:40, Qu Wenruo wrote:
>
>
> On 2022/4/29 22:30, Christoph Hellwig wrote:
>> Instead of attaching an extra allocation and an indirect call to each
>> low-level bio issued by the RAID code, add a work_struct to struct
>> btrfs_raid_bio and only defer the per-rbio completion action.  The
>> per-bio action for all the I/Os is trivial and can be safely done
>> from interrupt context.
>>
>> As a nice side effect this also allows sharing the boilerplate code
>> for the per-bio completions.
>>
>> Signed-off-by: Christoph Hellwig <hch@lst.de>
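
To summarize the flow this patch moves to (condensed from the hunks
below; error paths and the recover/scrub variants are trimmed, so this
is a sketch rather than the exact patched code): the per-bio bookkeeping
stays in the bio end_io handler and runs in interrupt context, and only
the last completed stripe bio defers the per-rbio step to the new
workqueue:

    static void raid56_bio_end_io(struct bio *bio)
    {
    	struct btrfs_raid_bio *rbio = bio->bi_private;

    	/* Cheap per-bio work, safe from interrupt context. */
    	if (bio->bi_status)
    		fail_bio_stripe(rbio, bio);
    	else
    		set_bio_pages_uptodate(rbio, bio);
    	bio_put(bio);

    	/* Only the last completed stripe bio defers the per-rbio action. */
    	if (atomic_dec_and_test(&rbio->stripes_pending))
    		queue_work(rbio->bioc->fs_info->endio_raid56_workers,
    			   &rbio->end_io_work);
    }

Each submitter then selects the per-rbio handler up front, e.g. for the
rmw read phase:

    	atomic_set(&rbio->stripes_pending, bios_to_read);
    	INIT_WORK(&rbio->end_io_work, raid56_rmw_end_io_work);
    	while ((bio = bio_list_pop(&bio_list))) {
    		bio->bi_end_io = raid56_bio_end_io;
    		submit_bio(bio);
    	}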
>
> It looks like this patch is causing a test failure in btrfs/027, at least
> for subpage (64K page size, 4K sectorsize) cases.

Also confirmed the same hang at the same commit on x86_64 (4K page size,
4K sectorsize).

Also 100% reproducible.

Thanks,
Qu

>
> Reproducibility is 100% (4/4 tried).
>
> The hanging sub-test case is the replacement of a missing device in raid5.
>
> The relevant dmesg (including the hung thread's stack dump) is:
>
> [  276.672541] BTRFS warning (device dm-1): read-write for sector size
> 4096 with page size 65536 is experimental
> [  276.744316] BTRFS info (device dm-1): checking UUID tree
> [  277.387701] BTRFS info (device dm-1): allowing degraded mounts
> [  277.390314] BTRFS info (device dm-1): using free space tree
> [  277.392108] BTRFS info (device dm-1): has skinny extents
> [  277.393890] BTRFS warning (device dm-1): read-write for sector size
> 4096 with page size 65536 is experimental
> [  277.420922] BTRFS warning (device dm-1): devid 2 uuid
> 4b67464d-e851-4a88-8765-67b043d4680f is missing
> [  277.432694] BTRFS warning (device dm-1): devid 2 uuid
> 4b67464d-e851-4a88-8765-67b043d4680f is missing
> [  277.648326] BTRFS info (device dm-1): dev_replace from <missing disk>
> (devid 2) to /dev/mapper/test-scratch5 started
> [  297.264371] task:btrfs           state:D stack:    0 pid: 7158 ppid:
>   6493 flags:0x0000000c
> [  297.280744] Call trace:
> [  297.282351]  __switch_to+0xfc/0x160
> [  297.284525]  __schedule+0x260/0x61c
> [  297.286959]  schedule+0x54/0xc4
> [  297.288980]  scrub_enumerate_chunks+0x610/0x760 [btrfs]
> [  297.292504]  btrfs_scrub_dev+0x1a0/0x530 [btrfs]
> [  297.306738]  btrfs_dev_replace_start+0x2a4/0x2d0 [btrfs]
> [  297.310418]  btrfs_dev_replace_by_ioctl+0x48/0x84 [btrfs]
> [  297.314026]  btrfs_ioctl_dev_replace+0x1b8/0x210 [btrfs]
> [  297.328014]  btrfs_ioctl+0xa48/0x1a70 [btrfs]
> [  297.330705]  __arm64_sys_ioctl+0xb4/0x100
> [  297.333037]  invoke_syscall+0x50/0x120
> [  297.343237]  el0_svc_common.constprop.0+0x4c/0x100
> [  297.345716]  do_el0_svc+0x34/0xa0
> [  297.347242]  el0_svc+0x34/0xb0
> [  297.348763]  el0t_64_sync_handler+0xa8/0x130
> [  297.350870]  el0t_64_sync+0x18c/0x190
>
> Mind taking a look at that hang?
>
> Thanks,
> Qu
>
>> ---
>>   fs/btrfs/ctree.h   |   2 +-
>>   fs/btrfs/disk-io.c |  12 ++---
>>   fs/btrfs/disk-io.h |   1 -
>>   fs/btrfs/raid56.c  | 111 ++++++++++++++++++---------------------------
>>   4 files changed, 49 insertions(+), 77 deletions(-)
>>
>> diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
>> index 40a6f61559348..4dd0d4a2e7757 100644
>> --- a/fs/btrfs/ctree.h
>> +++ b/fs/btrfs/ctree.h
>> @@ -853,7 +853,7 @@ struct btrfs_fs_info {
>>       struct btrfs_workqueue *flush_workers;
>>       struct btrfs_workqueue *endio_workers;
>>       struct btrfs_workqueue *endio_meta_workers;
>> -    struct btrfs_workqueue *endio_raid56_workers;
>> +    struct workqueue_struct *endio_raid56_workers;
>>       struct workqueue_struct *rmw_workers;
>>       struct btrfs_workqueue *endio_meta_write_workers;
>>       struct btrfs_workqueue *endio_write_workers;
>> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
>> index 73e12ecc81be1..3c6137734d28c 100644
>> --- a/fs/btrfs/disk-io.c
>> +++ b/fs/btrfs/disk-io.c
>> @@ -753,14 +753,10 @@ static void end_workqueue_bio(struct bio *bio)
>>               wq = fs_info->endio_meta_write_workers;
>>           else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_FREE_SPACE)
>>               wq = fs_info->endio_freespace_worker;
>> -        else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56)
>> -            wq = fs_info->endio_raid56_workers;
>>           else
>>               wq = fs_info->endio_write_workers;
>>       } else {
>> -        if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56)
>> -            wq = fs_info->endio_raid56_workers;
>> -        else if (end_io_wq->metadata)
>> +        if (end_io_wq->metadata)
>>               wq = fs_info->endio_meta_workers;
>>           else
>>               wq = fs_info->endio_workers;
>> @@ -2274,7 +2270,8 @@ static void btrfs_stop_all_workers(struct
>> btrfs_fs_info *fs_info)
>>       btrfs_destroy_workqueue(fs_info->hipri_workers);
>>       btrfs_destroy_workqueue(fs_info->workers);
>>       btrfs_destroy_workqueue(fs_info->endio_workers);
>> -    btrfs_destroy_workqueue(fs_info->endio_raid56_workers);
>> +    if (fs_info->endio_raid56_workers)
>> +        destroy_workqueue(fs_info->endio_raid56_workers);
>>       if (fs_info->rmw_workers)
>>           destroy_workqueue(fs_info->rmw_workers);
>>       btrfs_destroy_workqueue(fs_info->endio_write_workers);
>> @@ -2477,8 +2474,7 @@ static int btrfs_init_workqueues(struct
>> btrfs_fs_info *fs_info)
>>           btrfs_alloc_workqueue(fs_info, "endio-meta-write", flags,
>>                         max_active, 2);
>>       fs_info->endio_raid56_workers =
>> -        btrfs_alloc_workqueue(fs_info, "endio-raid56", flags,
>> -                      max_active, 4);
>> +        alloc_workqueue("btrfs-endio-raid56", flags, max_active);
>>       fs_info->rmw_workers = alloc_workqueue("btrfs-rmw", flags,
>> max_active);
>>       fs_info->endio_write_workers =
>>           btrfs_alloc_workqueue(fs_info, "endio-write", flags,
>> diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
>> index 9340e3266e0ac..97255e3d7e524 100644
>> --- a/fs/btrfs/disk-io.h
>> +++ b/fs/btrfs/disk-io.h
>> @@ -21,7 +21,6 @@ enum btrfs_wq_endio_type {
>>       BTRFS_WQ_ENDIO_DATA,
>>       BTRFS_WQ_ENDIO_METADATA,
>>       BTRFS_WQ_ENDIO_FREE_SPACE,
>> -    BTRFS_WQ_ENDIO_RAID56,
>>   };
>>
>>   static inline u64 btrfs_sb_offset(int mirror)
>> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
>> index a5b623ee6facd..1a3c1a9b10d0b 100644
>> --- a/fs/btrfs/raid56.c
>> +++ b/fs/btrfs/raid56.c
>> @@ -164,6 +164,9 @@ struct btrfs_raid_bio {
>>       atomic_t stripes_pending;
>>
>>       atomic_t error;
>> +
>> +    struct work_struct end_io_work;
>> +
>>       /*
>>        * these are two arrays of pointers.  We allocate the
>>        * rbio big enough to hold them both and setup their
>> @@ -1552,15 +1555,7 @@ static void set_bio_pages_uptodate(struct
>> btrfs_raid_bio *rbio, struct bio *bio)
>>       }
>>   }
>>
>> -/*
>> - * end io for the read phase of the rmw cycle.  All the bios here are
>> physical
>> - * stripe bios we've read from the disk so we can recalculate the
>> parity of the
>> - * stripe.
>> - *
>> - * This will usually kick off finish_rmw once all the bios are read
>> in, but it
>> - * may trigger parity reconstruction if we had any errors along the way
>> - */
>> -static void raid_rmw_end_io(struct bio *bio)
>> +static void raid56_bio_end_io(struct bio *bio)
>>   {
>>       struct btrfs_raid_bio *rbio = bio->bi_private;
>>
>> @@ -1571,23 +1566,34 @@ static void raid_rmw_end_io(struct bio *bio)
>>
>>       bio_put(bio);
>>
>> -    if (!atomic_dec_and_test(&rbio->stripes_pending))
>> -        return;
>> +    if (atomic_dec_and_test(&rbio->stripes_pending))
>> +        queue_work(rbio->bioc->fs_info->endio_raid56_workers,
>> +               &rbio->end_io_work);
>> +}
>>
>> -    if (atomic_read(&rbio->error) > rbio->bioc->max_errors)
>> -        goto cleanup;
>> +/*
>> + * End io handler for the read phase of the rmw cycle.  All the bios
>> here are
>> + * physical stripe bios we've read from the disk so we can
>> recalculate the
>> + * parity of the stripe.
>> + *
>> + * This will usually kick off finish_rmw once all the bios are read
>> in, but it
>> + * may trigger parity reconstruction if we had any errors along the way
>> + */
>> +static void raid56_rmw_end_io_work(struct work_struct *work)
>> +{
>> +    struct btrfs_raid_bio *rbio =
>> +        container_of(work, struct btrfs_raid_bio, end_io_work);
>> +
>> +    if (atomic_read(&rbio->error) > rbio->bioc->max_errors) {
>> +        rbio_orig_end_io(rbio, BLK_STS_IOERR);
>> +        return;
>> +    }
>>
>>       /*
>> -     * this will normally call finish_rmw to start our write
>> -     * but if there are any failed stripes we'll reconstruct
>> -     * from parity first
>> +     * This will normally call finish_rmw to start our write but if
>> there
>> +     * are any failed stripes we'll reconstruct from parity first.
>>        */
>>       validate_rbio_for_rmw(rbio);
>> -    return;
>> -
>> -cleanup:
>> -
>> -    rbio_orig_end_io(rbio, BLK_STS_IOERR);
>>   }
>>
>>   /*
>> @@ -1662,11 +1668,9 @@ static int raid56_rmw_stripe(struct
>> btrfs_raid_bio *rbio)
>>        * touch it after that.
>>        */
>>       atomic_set(&rbio->stripes_pending, bios_to_read);
>> +    INIT_WORK(&rbio->end_io_work, raid56_rmw_end_io_work);
>>       while ((bio = bio_list_pop(&bio_list))) {
>> -        bio->bi_end_io = raid_rmw_end_io;
>> -
>> -        btrfs_bio_wq_end_io(rbio->bioc->fs_info, bio,
>> BTRFS_WQ_ENDIO_RAID56);
>> -
>> +        bio->bi_end_io = raid56_bio_end_io;
>>           submit_bio(bio);
>>       }
>>       /* the actual write will happen once the reads are done */
>> @@ -2108,25 +2112,13 @@ static void __raid_recover_end_io(struct
>> btrfs_raid_bio *rbio)
>>   }
>>
>>   /*
>> - * This is called only for stripes we've read from disk to
>> - * reconstruct the parity.
>> + * This is called only for stripes we've read from disk to
>> reconstruct the
>> + * parity.
>>    */
>> -static void raid_recover_end_io(struct bio *bio)
>> +static void raid_recover_end_io_work(struct work_struct *work)
>>   {
>> -    struct btrfs_raid_bio *rbio = bio->bi_private;
>> -
>> -    /*
>> -     * we only read stripe pages off the disk, set them
>> -     * up to date if there were no errors
>> -     */
>> -    if (bio->bi_status)
>> -        fail_bio_stripe(rbio, bio);
>> -    else
>> -        set_bio_pages_uptodate(rbio, bio);
>> -    bio_put(bio);
>> -
>> -    if (!atomic_dec_and_test(&rbio->stripes_pending))
>> -        return;
>> +    struct btrfs_raid_bio *rbio =
>> +        container_of(work, struct btrfs_raid_bio, end_io_work);
>>
>>       if (atomic_read(&rbio->error) > rbio->bioc->max_errors)
>>           rbio_orig_end_io(rbio, BLK_STS_IOERR);
>> @@ -2209,11 +2201,9 @@ static int __raid56_parity_recover(struct
>> btrfs_raid_bio *rbio)
>>        * touch it after that.
>>        */
>>       atomic_set(&rbio->stripes_pending, bios_to_read);
>> +    INIT_WORK(&rbio->end_io_work, raid_recover_end_io_work);
>>       while ((bio = bio_list_pop(&bio_list))) {
>> -        bio->bi_end_io = raid_recover_end_io;
>> -
>> -        btrfs_bio_wq_end_io(rbio->bioc->fs_info, bio,
>> BTRFS_WQ_ENDIO_RAID56);
>> -
>> +        bio->bi_end_io = raid56_bio_end_io;
>>           submit_bio(bio);
>>       }
>>
>> @@ -2582,8 +2572,7 @@ static noinline void finish_parity_scrub(struct
>> btrfs_raid_bio *rbio,
>>       atomic_set(&rbio->stripes_pending, nr_data);
>>
>>       while ((bio = bio_list_pop(&bio_list))) {
>> -        bio->bi_end_io = raid_write_end_io;
>> -
>> +        bio->bi_end_io = raid56_bio_end_io;
>>           submit_bio(bio);
>>       }
>>       return;
>> @@ -2671,24 +2660,14 @@ static void
>> validate_rbio_for_parity_scrub(struct btrfs_raid_bio *rbio)
>>    * This will usually kick off finish_rmw once all the bios are read
>> in, but it
>>    * may trigger parity reconstruction if we had any errors along the way
>>    */
>> -static void raid56_parity_scrub_end_io(struct bio *bio)
>> +static void raid56_parity_scrub_end_io_work(struct work_struct *work)
>>   {
>> -    struct btrfs_raid_bio *rbio = bio->bi_private;
>> -
>> -    if (bio->bi_status)
>> -        fail_bio_stripe(rbio, bio);
>> -    else
>> -        set_bio_pages_uptodate(rbio, bio);
>> -
>> -    bio_put(bio);
>> -
>> -    if (!atomic_dec_and_test(&rbio->stripes_pending))
>> -        return;
>> +    struct btrfs_raid_bio *rbio =
>> +        container_of(work, struct btrfs_raid_bio, end_io_work);
>>
>>       /*
>> -     * this will normally call finish_rmw to start our write
>> -     * but if there are any failed stripes we'll reconstruct
>> -     * from parity first
>> +     * This will normally call finish_rmw to start our write, but if
>> there
>> +     * are any failed stripes we'll reconstruct from parity first
>>        */
>>       validate_rbio_for_parity_scrub(rbio);
>>   }
>> @@ -2758,11 +2737,9 @@ static void raid56_parity_scrub_stripe(struct
>> btrfs_raid_bio *rbio)
>>        * touch it after that.
>>        */
>>       atomic_set(&rbio->stripes_pending, bios_to_read);
>> +    INIT_WORK(&rbio->end_io_work, raid56_parity_scrub_end_io_work);
>>       while ((bio = bio_list_pop(&bio_list))) {
>> -        bio->bi_end_io = raid56_parity_scrub_end_io;
>> -
>> -        btrfs_bio_wq_end_io(rbio->bioc->fs_info, bio,
>> BTRFS_WQ_ENDIO_RAID56);
>> -
>> +        bio->bi_end_io = raid56_bio_end_io;
>>           submit_bio(bio);
>>       }
>>       /* the actual write will happen once the reads are done */


Thread overview: 18+ messages
2022-04-29 14:30 cleanup btrfs bio handling, part 2 v2 Christoph Hellwig
2022-04-29 14:30 ` [PATCH 01/10] btrfs: move more work into btrfs_end_bioc Christoph Hellwig
2022-04-29 14:30 ` [PATCH 02/10] btrfs: cleanup btrfs_submit_dio_bio Christoph Hellwig
2022-04-29 14:30 ` [PATCH 03/10] btrfs: split btrfs_submit_data_bio Christoph Hellwig
2022-04-29 14:30 ` [PATCH 04/10] btrfs: don't double-defer bio completions for compressed reads Christoph Hellwig
2022-04-29 14:30 ` [PATCH 05/10] btrfs: defer I/O completion based on the btrfs_raid_bio Christoph Hellwig
2022-05-01  4:40   ` Qu Wenruo
2022-05-01  4:53     ` Qu Wenruo [this message]
2022-05-02 16:44       ` Christoph Hellwig
2022-06-03 16:44       ` David Sterba
2022-06-03 16:45         ` David Sterba
2022-04-29 14:30 ` [PATCH 06/10] btrfs: don't use btrfs_bio_wq_end_io for compressed writes Christoph Hellwig
2022-04-29 14:30 ` [PATCH 07/10] btrfs: centralize setting REQ_META Christoph Hellwig
2022-04-29 14:30 ` [PATCH 08/10] btrfs: remove btrfs_end_io_wq Christoph Hellwig
2022-04-29 14:30 ` [PATCH 09/10] btrfs: refactor btrfs_map_bio Christoph Hellwig
2022-04-29 14:30 ` [PATCH 10/10] btrfs: do not allocate a btrfs_bio for low-level bios Christoph Hellwig
  -- strict thread matches above, loose matches on Subject: below --
2022-05-04 12:25 cleanup btrfs bio handling, part 2 v3 Christoph Hellwig
2022-05-04 12:25 ` [PATCH 05/10] btrfs: defer I/O completion based on the btrfs_raid_bio Christoph Hellwig
2022-04-25  7:54 cleanup btrfs bio handling, part 2 Christoph Hellwig
2022-04-25  7:54 ` [PATCH 05/10] btrfs: defer I/O completion based on the btrfs_raid_bio Christoph Hellwig
