From: Dave Chinner <david@fromorbit.com>
To: Chris Dunlop <chris@onthe.net.au>
Cc: linux-xfs@vger.kernel.org
Subject: Re: Extreme fragmentation ho!
Date: Tue, 29 Dec 2020 09:06:22 +1100
Message-ID: <20201228220622.GA164134@dread.disaster.area>
In-Reply-To: <20201221215453.GA1886598@onthe.net.au>

On Tue, Dec 22, 2020 at 08:54:53AM +1100, Chris Dunlop wrote:
> Hi,
> 
> I have a 2T file fragmented into 841891 randomly placed extents. It takes
> 4-6 minutes (depending on what else the filesystem is doing) to delete the
> file. This is causing a timeout in the application doing the removal, and
> hilarity ensues.

~3,000 extents/s being removed (841891 extents over 4-6 minutes),
with reflink+rmap modifications being made for every extent. Seems a
little slow compared to what I typically see, but...

> The fragmentation is the result of reflinking bits and bobs from other files
> into the subject file, so it's probably unavoidable.
> 
> The file is sitting on XFS on LV on a raid6 comprising 6 x 5400 RPM HDD:

... probably not that unreasonable for pretty much the slowest
storage configuration you could come up with for small,
metadata-write-intensive workloads.

> # xfs_info /home
> meta-data=/dev/mapper/vg00-home  isize=512    agcount=32, agsize=244184192 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=1        finobt=1, sparse=1, rmapbt=1
>          =                       reflink=1
> data     =                       bsize=4096   blocks=7813893120, imaxpct=5
>          =                       sunit=128    swidth=512 blks
> naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> log      =internal log           bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> I'm guessing the time taken to remove is not unreasonable given the speed of
> the underlying storage and the amount of metadata involved. Does my guess
> seem correct?

Yup.

> I'd like to do some experimentation with a facsimile of this file, e.g.
> try the remove on different storage subsystems, and/or with an external
> fast journal etc., to see how they compare.

I think you'll find a limit at ~20,000 extents/s, regardless of your
storage subsystem. Once you take away IO latency, it's basically
single threaded and CPU bound so performance is largely dependent
on how fast your CPUs are. IOWs, the moment you move to SSDs, it
will be CPU bound and still take a minute or two to remove all the
extents....

> What is the easiest way to recreate a similarly (or even better,
> identically) fragmented file?

Just script xfs_io to reflink random bits and bobs from other files
into a larger file?
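
Something like this rough, untested sketch (the paths, donor file
size and chunk size range below are all made up for illustration):

#!/bin/bash
# Fragment a sparse 2T target by reflinking random block-aligned
# chunks out of a donor file. All paths/sizes are placeholders.

src=/home/donor                     # existing, fully-written donor file
dst=/home/fragtest                  # target file to fragment
bs=4096                             # fs block size - offsets must be aligned
nextents=841891                     # match the original extent count
src_blocks=$((16 * 1024 * 1024))    # assumes a 64G donor
dst_blocks=$((512 * 1024 * 1024))   # 2T target

# $RANDOM is only 15 bits; glue two together for 30 bits
rand30() { echo $(( (RANDOM << 15) | RANDOM )); }

xfs_io -f -c "truncate $((dst_blocks * bs))" "$dst"

for ((i = 0; i < nextents; i++)); do
    blks=$(( RANDOM % 16 + 1 ))     # 1-16 block (4k-64k) chunks
    srcoff=$(( ($(rand30) % (src_blocks - blks)) * bs ))
    dstoff=$(( ($(rand30) % (dst_blocks - blks)) * bs ))
    xfs_io -c "reflink $src $srcoff $dstoff $((blks * bs))" "$dst"
done

Forking xfs_io once per extent (~841k times) will dominate the
runtime here; batching a few hundred -c "reflink ..." commands per
xfs_io invocation speeds the setup up considerably.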

> One way would be to use xfs_metadump / xfs_mdrestore to create an entire
> copy of the original filesystem, but I'd really prefer not taking the
> original fs offline for the time required. I also don't have the space to
> restore the whole fs but perhaps using lvmthin can address the restore
> issue, at the cost of a slight(?) performance impact due to the extra layer.

The easiest, most space-efficient way is to mdrestore to a file (it
ends up sparse, containing only metadata) and mount that via
loopback.
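
Roughly (made-up paths; note that xfs_metadump needs the source fs
unmounted or mounted read-only, and the restored image contains no
file data, which doesn't matter for timing an unlink):

# dump metadata only (-o keeps real filenames, -g shows progress)
xfs_metadump -g -o /dev/mapper/vg00-home /scratch/home.metadump

# restore to a sparse image file and mount it via loopback
xfs_mdrestore -g /scratch/home.metadump /scratch/home.img
mount -o loop /scratch/home.img /mnt/test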

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
