linux-btrfs.vger.kernel.org archive mirror
From: ein <ein.net@gmail.com>
To: Duncan <1i5t5.duncan@cox.net>, linux-btrfs@vger.kernel.org
Subject: Re: number of subvolumes
Date: Fri, 01 Sep 2017 12:21:18 +0200	[thread overview]
Message-ID: <59A9349E.1000302@gmail.com> (raw)
In-Reply-To: <pan$30aee$f76230b$532de4a7$4d3a9985@cox.net>

On 08/31/2017 06:18 PM, Duncan wrote:
[...]
> Michał Sokołowski posted on Thu, 31 Aug 2017 16:38:14 +0200 as excerpted:
>> Is there another tool to check the number of fragments of a given file
>> when using compression?
> AFAIK there isn't an official one, tho someone posted a script (python, 
> IIRC) at one point and may repost it here.
>
> You can actually get the information needed from filefrag -v (which is
> what the script uses), but it takes a bit more effort than usual, either
> scripted or by brain-power, to convert the results into real
> fragmentation numbers.
>
> The problem is that btrfs compression works in 128 KiB blocks, and 
> filefrag sees each of those as a fragment.  The extra effort involves 
> checking the addresses of the reported 128 KiB blocks to see if they are 
> actually contiguous, that is, one starts just after the previous one 
> ends.  If so it's actually not fragmented at that point.  But if the 
> addresses aren't contiguous, there's fragmentation at that point.
>
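
For reference, here is a minimal sketch of what such a script might look
like. I haven't compared it with the one that was posted; it just shells
out to filefrag -v, assumes the usual e2fsprogs extent-table format, and
merges extents whose physical ranges are adjacent:

#!/usr/bin/env python3
# Untested sketch: count "real" fragments of a (possibly compressed) btrfs
# file by merging filefrag -v extents that are physically contiguous.
import re
import subprocess
import sys

# Extent lines look like:
#   0:        0..      31:    3456789..   3456820:     32:        encoded
# i.e. "ext: logical_start..logical_end: physical_start..physical_end: ..."
EXTENT = re.compile(r"^\s*\d+:\s*\d+\.\.\s*\d+:\s*(\d+)\.\.\s*(\d+):")

def real_fragments(path):
    out = subprocess.check_output(["filefrag", "-v", path]).decode()
    fragments = 0
    prev_end = None
    for line in out.splitlines():
        m = EXTENT.match(line)
        if not m:
            continue
        phys_start, phys_end = int(m.group(1)), int(m.group(2))
        # A new fragment only starts where the physical address jumps;
        # an extent that begins right after the previous one ended is
        # just compression chopping a contiguous run into 128 KiB pieces.
        if prev_end is None or phys_start != prev_end + 1:
            fragments += 1
        prev_end = phys_end
    return fragments

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print("%s: %d fragment(s)" % (p, real_fragments(p)))
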
> I don't personally worry too much about it here, for two reasons.  First, 
> I /always/ run with the autodefrag mount option, which keeps 
> fragmentation manageable in any case[1], and second, I'm on ssd, where 
> the effects of fragmentation aren't as pronounced.  (On spinning rust 
> it's generally the seek times that dominate.  On ssds seek time is 
> effectively zero, but there's still an IOPS cost.)
>
> So while I've run filefrag -v and looked at the results a few times out 
> of curiosity, and indeed could see reported "fragments" that were 
> actually contiguous, it was simply a curiosity to me, thus my not 
> grabbing that script and putting it to immediate use.
>
> ---
> [1] AFAIK autodefrag checks fragmentation on writes, and rewrites 16 MiB 
> blocks if necessary.  If, like me, you always run it from the moment you 
> start putting data on the filesystem, that should work pretty well.  If, 
> however, you haven't been running it or doing manual defrag, it may take 
> a while to "catch up", because defrag only acts on writes and the free 
> space may itself be fragmented enough that there are no 16 MiB blocks to 
> write into.  And of course it won't defrag anything that's never written 
> to again but is often reread, so such a file's existing fragmentation 
> remains an issue.
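
(For completeness: autodefrag is just a mount option, so enabling it looks
something like the following, with the device and mountpoint as
placeholders:

  mount -o remount,autodefrag /mnt                       # at runtime
  UUID=<fs-uuid>  /mnt  btrfs  defaults,autodefrag,compress=lzo  0 0

The compress=lzo in the fstab line is only an example of combining it with
compression, as discussed above.)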

Very comprehensive, thank you. I was asking because I'd like to learn how
truly random writes from a VM affect btrfs performance (versus XFS and
ext4), and to try to develop a workaround that reduces or prevents the
impact while keeping csums, CoW (snapshots) and compression.
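
As a first step I'm thinking of something along these lines (a rough
sketch of my own, not a finished methodology): let fio generate VM-like
random writes against a test file on each filesystem, compare the reported
throughput and latency, and afterwards check real fragmentation on btrfs
with a script like the one above. Paths and sizes below are placeholders:

  fio --name=vmlike --filename=/mnt/btrfs/test.img --size=4G \
      --rw=randwrite --bs=4k --ioengine=libaio --iodepth=16 \
      --direct=1 --time_based --runtime=120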

Thread overview: 44+ messages
2017-08-22 13:22 netapp-alike snapshots? Ulli Horlacher
2017-08-22 13:44 ` Peter Becker
2017-08-22 14:24   ` Ulli Horlacher
2017-08-22 16:08     ` Peter Becker
2017-08-22 16:48       ` Ulli Horlacher
2017-08-22 16:45     ` Roman Mamedov
2017-08-22 16:57       ` Ulli Horlacher
2017-08-22 17:19         ` A L
2017-08-22 18:01           ` Ulli Horlacher
2017-08-22 18:36             ` Peter Grandi
2017-08-22 20:48               ` Ulli Horlacher
2017-08-23  7:18                 ` number of subvolumes Ulli Horlacher
2017-08-23  8:37                   ` A L
2017-08-23 16:48                     ` Ferry Toth
2017-08-24 17:45                       ` Peter Grandi
2017-08-31  6:49                         ` Ulli Horlacher
2017-08-31 11:18                           ` Austin S. Hemmelgarn
2017-08-31 14:38                             ` Michał Sokołowski
2017-08-31 16:18                               ` Duncan
2017-09-01 10:21                                 ` ein [this message]
2017-09-01 11:47                                   ` Austin S. Hemmelgarn
2017-08-24 19:40                       ` Marat Khalili
2017-08-24 21:56                         ` Ferry Toth
2017-08-25  5:54                           ` Chris Murphy
2017-08-25 11:45                           ` Austin S. Hemmelgarn
2017-08-25 12:55                             ` Ferry Toth
2017-08-25 19:18                               ` Austin S. Hemmelgarn
2017-08-23 12:11                   ` Peter Grandi
2017-08-22 21:53               ` user snapshots Ulli Horlacher
2017-08-23  6:28                 ` Dmitrii Tcvetkov
2017-08-23  7:16                   ` Dmitrii Tcvetkov
2017-08-23  7:20                     ` Ulli Horlacher
2017-08-23 11:42                       ` Peter Grandi
2017-08-23 21:13                         ` Ulli Horlacher
2017-08-25 11:28                           ` Austin S. Hemmelgarn
2017-08-22 17:36         ` netapp-alike snapshots? Roman Mamedov
2017-08-22 18:10           ` Ulli Horlacher
2017-09-09 13:26 ` Ulli Horlacher
2017-09-09 13:36   ` Marc MERLIN
2017-09-09 13:44     ` Ulli Horlacher
2017-09-09 19:43       ` Andrei Borzenkov
2017-09-09 19:52         ` Ulli Horlacher
2017-09-10  7:10           ` A L
2017-09-10 14:54         ` Marc MERLIN
