linux-btrfs.vger.kernel.org archive mirror
From: E V <eliventer@gmail.com>
To: "Ellis H. Wilson III" <ellisw@panasas.com>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: btrfs-cleaner / snapshot performance analysis
Date: Tue, 13 Feb 2018 08:34:37 -0500	[thread overview]
Message-ID: <CAJtFHUQSypeN4t9bpAXfkEgq2Nj4hck3x+17-jaxuH6osJjNGQ@mail.gmail.com> (raw)
In-Reply-To: <c7e267a4-d47d-9713-c222-81dcb441e009@panasas.com>

On Mon, Feb 12, 2018 at 10:37 AM, Ellis H. Wilson III
<ellisw@panasas.com> wrote:
> On 02/11/2018 01:24 PM, Hans van Kranenburg wrote:
>>
>> Why not just use `btrfs fi du <subvol> <snap1> <snap2>` now and then and
>> update your administration with the results? .. Instead of putting the
>> burden of keeping track of all administration during every tiny change
>> all day long?
>
>
> I will look into that if using built-in group capacity functionality proves
> to be truly untenable.  Thanks!
>
>>> CoW is still valuable for us as we're shooting to support on the order
>>> of hundreds of snapshots per subvolume,
>>
>>
>> Hundreds will get you into trouble even without qgroups.
>
>
> I should have been more specific.  We are looking to use up to a few dozen
> snapshots per subvolume, but will have many (tens to hundreds of) discrete
> subvolumes (each with up to a few dozen snapshots) in a BTRFS filesystem.
> If I have it wrong and the scalability issues in BTRFS do not solely apply
> to subvolumes and their snapshot counts, please let me know.
>
> I will note you focused on my tiny desktop filesystem when making some of
> your previous comments -- this is why I didn't want to share specific
> details.  Our filesystem will be RAID0 with six large HDDs (12TB each).
> Reliability concerns do not apply to our situation for technical reasons,
> but if there are capacity scaling issues with BTRFS I should be made aware
> of, I'd be glad to hear them.  I have not seen any in technical
> documentation of such a limit, and experiments so far on 6x6TB arrays have
> not shown any performance problems, so I'm inclined to believe the only
> scaling issue exists with reflinks.  Correct me if I'm wrong.
>
> Thanks,
>
> ellis
>

When testing btrfs on large volumes, especially with metadata-heavy
operations, I'd suggest you match the node size of your mkfs.btrfs
(-n) with the stripe size used in creating your RAID array. Also, use
the ssd_spread mount option as discussed in a previous thread. It makes
a big difference on arrays: it allocates much more space for metadata,
but it greatly reduces fragmentation over time.
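As a rough sketch of what that looks like in practice, assuming a
hypothetical 64 KiB per-disk chunk size and made-up device names (neither
comes from this thread), the commands are only printed here, not executed:

```shell
# Match the btrfs node size to the RAID per-disk chunk (stripe unit) size.
# CHUNK_KIB is an assumption for illustration; use your array's real value.
CHUNK_KIB=64                      # per-disk chunk size used when building the array
NODESIZE=$((CHUNK_KIB * 1024))    # mkfs.btrfs -n takes bytes; nodesize tops out at 64 KiB
echo "mkfs.btrfs -n ${NODESIZE} -d raid0 -m raid0 /dev/sd[b-g]"
echo "mount -o ssd_spread /dev/sdb /mnt"
```

Drop the echo wrappers to actually run the commands; the device names and
mount point above are placeholders.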


Thread overview: 22+ messages
2018-02-09 16:45 btrfs-cleaner / snapshot performance analysis Ellis H. Wilson III
2018-02-09 17:10 ` Peter Grandi
2018-02-09 20:36 ` Hans van Kranenburg
2018-02-10 18:29   ` Ellis H. Wilson III
2018-02-10 22:05     ` Tomasz Pala
2018-02-11 15:59       ` Ellis H. Wilson III
2018-02-11 18:24         ` Hans van Kranenburg
2018-02-12 15:37           ` Ellis H. Wilson III
2018-02-12 16:02             ` Austin S. Hemmelgarn
2018-02-12 16:39               ` Ellis H. Wilson III
2018-02-12 18:07                 ` Austin S. Hemmelgarn
2018-02-13 13:34             ` E V [this message]
2018-02-11  1:02     ` Hans van Kranenburg
2018-02-11  9:31       ` Andrei Borzenkov
2018-02-11 17:25         ` Adam Borowski
2018-02-11 16:15       ` Ellis H. Wilson III
2018-02-11 18:03         ` Hans van Kranenburg
2018-02-12 14:45           ` Ellis H. Wilson III
2018-02-12 17:09             ` Hans van Kranenburg
2018-02-12 17:38               ` Ellis H. Wilson III
2018-02-11  6:40 ` Qu Wenruo
2018-02-14  1:14   ` Darrick J. Wong
