linux-btrfs.vger.kernel.org archive mirror
From: Chris Murphy <lists@colorremedies.com>
To: waxhead <waxhead@dirtcellar.net>
Cc: Chris Murphy <lists@colorremedies.com>,
	Cloud Admin <admin@cloud.haefemeier.eu>,
	Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Best Practice: Add new device to RAID1 pool
Date: Mon, 24 Jul 2017 15:20:59 -0600
Message-ID: <CAJCQCtTnShjzumppLPYWzc3nXfKoTads91dtfLk8UoQ3w4_P+Q@mail.gmail.com>
In-Reply-To: <aac851d3-4187-fc94-bdcb-e79b83b74da8@dirtcellar.net>

On Mon, Jul 24, 2017 at 3:12 PM, waxhead <waxhead@dirtcellar.net> wrote:
>
>
> Chris Murphy wrote:
>>
>> On Mon, Jul 24, 2017 at 5:27 AM, Cloud Admin <admin@cloud.haefemeier.eu>
>> wrote:
>>
>>> I am a little bit confused because the balance command has been running
>>> for 12 hours and only 3GB of data have been touched.
>>
>> That's incredibly slow. Something isn't right.
>>
>> Using btrfs-debugfs -b from btrfs-progs, I've selected a few 100% full
>> chunks.
>>
>> [156777.077378] f26s.localdomain sudo[13757]:    chris : TTY=pts/2 ;
>> PWD=/home/chris ; USER=root ; COMMAND=/sbin/btrfs balance start
>> -dvrange=157970071552..159043813376 /
>> [156773.328606] f26s.localdomain kernel: BTRFS info (device sda1):
>> relocating block group 157970071552 flags data
>> [156800.408918] f26s.localdomain kernel: BTRFS info (device sda1):
>> found 38952 extents
>> [156861.343067] f26s.localdomain kernel: BTRFS info (device sda1):
>> found 38951 extents
>>
>> That 1GiB chunk with quite a few fragments took 88s. That's roughly 11MB/s.
>> Even for a hard drive, that's slow.
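
To spell out the steps behind that log, here is a minimal sketch, assuming
the btrfs-debugfs script that ships with btrfs-progs and reusing the vrange
from the log above (substitute whichever block group you actually pick):

  # list block groups with their usage; pick a data block group that is
  # close to 100% used (output format differs between btrfs-progs versions)
  sudo btrfs-debugfs -b /

  # relocate only that block group; vrange is its start offset..start+length
  sudo btrfs balance start -dvrange=157970071552..159043813376 /

  # the kernel logs "relocating block group ..." and "found N extents";
  # the time between those messages gives the effective relocation rate
  sudo dmesg | grep -E 'relocating block group|found [0-9]+ extents'
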
>
> This may be a stupid question, but is your pool of butter (or BTRFS pool)
> by any chance hooked up via USB? If this is USB 2.0 at 480Mbit/s, that is
> about 57MB/s, and 57MB/s / 4 drives = roughly 14.25MB/s, or about 11MB/s
> if you shave off some overhead.
>

Nope, USB 3. Typically on scrubs I get 110MB/s that winds down to
60MB/s as it progresses to the slow parts of the disk.
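
If anyone wants to double-check the bus speed and the current throughput,
here is a rough sketch, assuming the filesystem is mounted at / as in the
log above (lsusb comes from usbutils; the btrfs subcommands are stock
btrfs-progs):

  # confirm the enclosure actually negotiated USB 3 (5000M) rather than 480M
  lsusb -t

  # per-device scrub statistics; bytes scrubbed over elapsed time gives the rate
  sudo btrfs scrub status -d /

  # progress of a running balance (chunks balanced vs. considered)
  sudo btrfs balance status /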


-- 
Chris Murphy


Thread overview: 21+ messages
2017-07-24 11:27 Best Practice: Add new device to RAID1 pool Cloud Admin
2017-07-24 13:46 ` Austin S. Hemmelgarn
2017-07-24 14:08   ` Roman Mamedov
2017-07-24 16:42     ` Cloud Admin
2017-07-24 14:12   ` Cloud Admin
2017-07-24 14:25     ` Austin S. Hemmelgarn
2017-07-24 16:40       ` Cloud Admin
2017-07-29 23:04         ` Best Practice: Add new device to RAID1 pool (Summary) Cloud Admin
2017-07-31 11:52           ` Austin S. Hemmelgarn
2017-07-24 20:35 ` Best Practice: Add new device to RAID1 pool Chris Murphy
2017-07-24 20:42   ` Hugo Mills
2017-07-24 20:55     ` Chris Murphy
2017-07-24 21:00       ` Hugo Mills
2017-07-24 21:17       ` Adam Borowski
2017-07-24 23:18         ` Chris Murphy
2017-07-25 17:56     ` Cloud Admin
2017-07-24 21:12   ` waxhead
2017-07-24 21:20     ` Chris Murphy [this message]
2017-07-25  2:22       ` Marat Khalili
2017-07-25  8:13         ` Chris Murphy
2017-07-25 17:46     ` Cloud Admin

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CAJCQCtTnShjzumppLPYWzc3nXfKoTads91dtfLk8UoQ3w4_P+Q@mail.gmail.com \
    --to=lists@colorremedies.com \
    --cc=admin@cloud.haefemeier.eu \
    --cc=linux-btrfs@vger.kernel.org \
    --cc=waxhead@dirtcellar.net \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.

This is a public inbox; see mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for NNTP
newsgroup(s).