linux-btrfs.vger.kernel.org archive mirror
From: Jorge Bastos <jorge.mrbastos@gmail.com>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: RAID5 scrub performance
Date: Thu, 28 Nov 2019 09:24:50 +0000	[thread overview]
Message-ID: <CAHzMYBS5asoCqa-DCjutED69SyvXVx+ht7x_QsJZyJTNZUcOiQ@mail.gmail.com> (raw)
In-Reply-To: <2b0e5191-740f-0530-4825-0b0b6b653efb@gmx.com>

Hi,

Thanks for the reply, but I'm not sure I understand. If I start the
scrub for a single device on the RAID5 pool, it still scrubs the whole
filesystem, and the speeds are the same.

Jorge




On Thu, Nov 28, 2019 at 12:01 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2019/11/27 下午11:11, Jorge Bastos wrote:
> > I believe this is a known issue, but I wonder if there's something I
> > can do to optimize RAID5 scrub speed, or if anything is in the works
> > to improve it.
> >
> > kernel 5.3.8
> > btrfs-progs 5.3.1
> >
> >
> > Single disk filesystem is performing as expected:
> >
> > UUID:             9c0ed213-d9c5-4e93-b9db-218b43533c15
> > Scrub started:    Tue Nov 26 21:58:20 2019
> > Status:           finished
> > Duration:         2:24:32
> > Total to scrub:   1.04TiB
> > Rate:             125.17MiB/s
> > Error summary:    no errors found
> >
> >
> >
> > 4 disk raid5 (raid1 metadata) on the same server using the same model
> > disks as above:
> >
> > UUID:             b75ee8b5-ae1c-4395-aa39-bebf10993057
> > Scrub started:    Wed Nov 27 07:32:46 2019
> > Status:           running
> > Duration:         7:34:50
> > Time left:        1:52:37
> > ETA:              Wed Nov 27 17:00:18 2019
> > Total to scrub:   1.20TiB
> > Bytes scrubbed:   982.05GiB
> > Rate:             36.85MiB/s
> > Error summary:    no errors found
> >
> >
> >
> > 6 SSD raid5 (raid1 metadata) also on the same server, still slow for
> > SSDs but at least scrub performance is acceptable:
> >
> > UUID:             e072aa60-33e2-4756-8496-c58cd8ba6053
> > Scrub started:    Wed Nov 27 15:08:31 2019
> > Status:           running
> > Duration:         0:01:40
> > Time left:        1:40:11
> > ETA:              Wed Nov 27 16:50:24 2019
> > Total to scrub:   3.24TiB
> > Bytes scrubbed:   54.37GiB
> > Rate:             556.73MiB/s
> > Error summary:    no errors found
> >
> > I still have some reservations about btrfs raid5/6, so for now I use
> > it mostly for smaller filesystems, but this slow scrub performance
> > will result in multi-day scrubs for a large filesystem, which isn't
> > very practical.
>
> Btrfs uses a suboptimal strategy for multi-disk scrubs:
> it queues a scrub for every disk at the same time.
>
> So it's common to get a lot of contention and even conflicting seek
> requests between the disks.
>
> Have you tried scrubbing only one disk at a time in such a case?
>
> Thanks,
> Qu
>
> >
> > Thanks,
> > Jorge
> >
>
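Qu's suggestion above (scrub one member at a time rather than letting
btrfs queue all disks simultaneously) can be sketched as a small shell
loop. The device names below are placeholders, not the poster's actual
devices (list yours with `btrfs filesystem show`), and the `echo` makes
this a dry run:

```shell
#!/bin/sh
# Sketch: scrub RAID5 member devices sequentially instead of all at
# once, to avoid competing seeks between disks. Placeholder devices --
# replace with your pool's members.
DEVICES="/dev/sda /dev/sdb /dev/sdc /dev/sdd"

for dev in $DEVICES; do
    # -B keeps the scrub in the foreground, so the next device starts
    # only after the previous one has finished.
    # 'echo' prints the command instead of running it; remove it to
    # actually scrub.
    echo btrfs scrub start -B "$dev"
done
```

Note the caveat from Jorge's reply above: on his kernel, starting a
scrub on a single device of the RAID5 pool still scrubbed the whole
filesystem at the same speed, so whether this sequencing helps may
depend on the kernel version.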

Thread overview: 4+ messages
2019-11-27 15:11 RAID5 scrub performance Jorge Bastos
2019-11-28  0:01 ` Qu Wenruo
2019-11-28  9:24   ` Jorge Bastos [this message]
2019-12-15 12:20   ` Jorge Bastos
