From: Brian Foster <bfoster@redhat.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: Dave Chinner <david@fromorbit.com>, linux-xfs@vger.kernel.org
Subject: Re: [PATCH v3 1/2] xfs: set a mount flag when perag reservation is active
Date: Fri, 19 Mar 2021 10:54:15 -0400	[thread overview]
Message-ID: <YFS7F4eNNC4BHo37@bfoster> (raw)
In-Reply-To: <20210319013430.GO22100@magnolia>

On Thu, Mar 18, 2021 at 06:34:30PM -0700, Darrick J. Wong wrote:
> On Fri, Mar 19, 2021 at 12:05:06PM +1100, Dave Chinner wrote:
> > On Thu, Mar 18, 2021 at 03:19:01PM -0700, Darrick J. Wong wrote:
> > > On Fri, Mar 19, 2021 at 07:55:36AM +1100, Dave Chinner wrote:
> > > > On Thu, Mar 18, 2021 at 12:17:06PM -0400, Brian Foster wrote:
> > > > > perag reservation is enabled at mount time on a per AG basis. The
> > > > > upcoming in-core allocation btree accounting mechanism needs to know
> > > > > when reservation is enabled and that all perag AGF contexts are
> > > > > initialized. As a preparation step, set a flag in the mount
> > > > > structure and unconditionally initialize the pagf on all mounts
> > > > > where at least one reservation is active.
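
(To make the mechanism described above concrete, a rough sketch; the
flag bit and the AGF-reading helper below are invented names for
illustration, not what the patch actually adds:)

#define XFS_MOUNT_PERAG_RESV_EX  (1ULL << 30)  /* hypothetical flag bit */

static int
example_perag_resv_init(
        struct xfs_mount        *mp)
{
        xfs_agnumber_t          agno;
        int                     error;

        for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) {
                /* read the AGF so pagf_init and the btree counts are valid */
                error = example_read_agf(mp, agno);  /* hypothetical helper */
                if (error)
                        return error;
        }

        /* remember that at least one per-AG reservation is active */
        mp->m_flags |= XFS_MOUNT_PERAG_RESV_EX;
        return 0;
}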
> > > > 
> > > > I'm not sure this is a good idea. AFAICT, this means just about any
> > > > filesystem with finobt, reflink and/or rmap will now typically read
> > > > every AGF header in the filesystem at mount time. That means pretty
> > > > much every v5 filesystem in production...
> > > 
> > > They already do that, because the AG headers are where we store the
> > > btree block counts.
> > 
> > Oh, we're brute forcing AG reservation space? I thought we were
> > doing something smarter than that, because I'm sure this isn't the
> > first time I've mentioned this problem....
> 
> Probably not... :)
> 
> > > > We've always tried to avoid needing to reading all AG headers at
> > > > mount time because that does not scale when we have really large
> > > > filesystems (I'm talking petabytes here). We should only read AG
> > > > headers if there is something not fully recovered during the mount
> > > > (i.e. slow path) and not on every mount.
> > > > 
> > > > Needing to do a few thousand synchronous read IOs during mount makes
> > > > mount very slow, and as such we always try to do dynamic
> > > > instantiation of AG headers...  Testing I've done with exabyte-scale
> > > > filesystems (>10^6 AGs) shows that it can take minutes for mount to
> > > > run when each AG header needs to be read, and that's on SSDs where
> > > > the individual read latency is only a couple of hundred
> > > > microseconds. On spinning disks that can do 200 IOPS, we're
> > > > potentially talking hours just to mount really large filesystems...
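
(Back-of-the-envelope numbers for the above, assuming one synchronous
header read per AG and >10^6 AGs:

        SSD at ~200us per read:  10^6 * 0.0002 s = ~200 s   (a few minutes)
        HDD at ~200 IOPS:        10^6 / 200      = ~5,000 s (~1.4 hours)

so the "minutes on SSD, hours on spinning disks" figures follow
directly from the AG count.)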
> > > 
> > > Is that with reflink enabled?  Reflink always scans the right edge of
> > > the refcount btree at mount to clean out stale COW staging extents,
> > 
> > Aren't they cleaned up at unmount when the inode is inactivated?
> 
> Yes.  Or when the blockgc timeout expires, or when ENOSPC pushes
> blockgc...
> 
> > i.e. isn't this something that should only be done on an unclean
> > mount?
> 
> Years ago (back when reflink was experimental) we left it that way so
> that if there were any serious implementation bugs we wouldn't leak
> blocks everywhere.  I think we forgot to take it out.
> 
> > > and
> > > (prior to the introduction of the inode btree counts feature last year)
> > > we also had to walk the entire finobt to find out how big it is.
> > 
> > ugh, I forgot about the fact we had to add that wart because we
> > screwed up the space reservations for finobt operations...
> 
> Yeah.
> 
> > As for large scale testing, I suspect I turned everything optional
> > off when I last did this testing, because mkfs currently requires a
> > lot of per-AG IO to initialise structures. On an SSD, mkfs.xfs
> > -K -f -d agcount=10000 ... takes
> > 
> > 		mkfs time	mount time
> > -m crc=0	15s		1s
> > -m rmapbt=1	25s		6s
> > 
> > Multiply those times by another 1000 to get to an 8EB
> > filesystem and the difference is several hours of mkfs time and
> > a couple of hours of mount time....
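
(Spelling out that extrapolation, assuming the per-AG cost stays
roughly linear when the agcount=10000 numbers are scaled by another
1000x:

        mkfs  (rmapbt=1):  25 s x 1000 = 25,000 s  ~= 7 hours
        mount (rmapbt=1):   6 s x 1000 =  6,000 s  ~= 1.7 hours

which matches the "several hours" / "couple of hours" estimate.)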
> > 
> > So from the numbers, it is pretty likely I didn't test anything that
> > actually required iterating 8 million AGs at mount time....
> > 
> > > TBH I think the COW recovery and the AG block reservation pieces are
> > > prime candidates for throwing at an xfs_pwork workqueue so we can
> > > perform those scans in parallel.
> 
> [This didn't turn out to be difficult at all.]
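
(For anyone following along, a generic illustration of that sort of
parallel per-AG scan; this uses a plain kernel workqueue rather than
the actual xfs_pwork interface, and the per-AG scan helper is
hypothetical:)

#include <linux/workqueue.h>
#include <linux/slab.h>

struct example_ag_scan {
        struct work_struct      work;
        struct xfs_mount        *mp;
        xfs_agnumber_t          agno;
};

static void
example_ag_scan_worker(
        struct work_struct      *work)
{
        struct example_ag_scan  *scan =
                        container_of(work, struct example_ag_scan, work);

        /* read this AG's headers and fold its usage into the reservation */
        example_scan_one_ag(scan->mp, scan->agno);  /* hypothetical */
        kfree(scan);
}

static int
example_scan_all_ags(
        struct xfs_mount        *mp,
        struct workqueue_struct *wq)
{
        xfs_agnumber_t          agno;
        int                     error = 0;

        for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) {
                struct example_ag_scan  *scan;

                scan = kmalloc(sizeof(*scan), GFP_KERNEL);
                if (!scan) {
                        error = -ENOMEM;
                        break;
                }
                scan->mp = mp;
                scan->agno = agno;
                INIT_WORK(&scan->work, example_ag_scan_worker);
                queue_work(wq, &scan->work);
        }

        /* wait for all queued per-AG work items to finish */
        flush_workqueue(wq);
        return error;
}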
> 
> > As I mentioned on #xfs, I think we only need to do the AG read if we
> > are near enospc. i.e. we can take the entire reservation at mount
> > time (which is fixed per-ag) and only take away the used from the
> > reservation (i.e. return to the free space pool) when we actually
> > access the AGF/AGI the first time. Or when we get an ENOSPC
> > event, which might occur when we try to take the fixed reservation
> > at mount time...
> 
> <nod> That's probably not hard.  Compute the theoretical maximum size of
> the finobt/rmapbt/refcountbt, multiply that by the number of AGs, try to
> reserve that much, and if we get it, we can trivially initialise the
> per-AG reservation structure.  If that fails, we fall back to the
> scanning thing we do now:
> 

Even in the failure case, we might be able to limit the mount time
scanning by only scanning until we've found enough AGs with consumed
reservation that the estimated mount time reservation succeeds. Of
course, the worst case would still be a full scan (either due to
-ENOSPC on the reservation itself, or very shortly after mount since
the reservation might have left the fs near -ENOSPC), so it might not
be worth doing unless there's real value and the logic stays simple...
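
(Roughly something like the following; the reservation helpers are
hypothetical and the only point is the early bound on the scan:)

static int
example_partial_scan(
        struct xfs_mount        *mp,
        xfs_extlen_t            shortfall)  /* blocks the estimate failed to reserve */
{
        xfs_agnumber_t          agno;

        for (agno = 0; agno < mp->m_sb.sb_agcount && shortfall > 0; agno++) {
                xfs_extlen_t    used;

                /*
                 * Read this AG's headers and see how much of its worst-case
                 * reservation has already been consumed on disk.
                 */
                used = example_ag_resv_used(mp, agno);  /* hypothetical */

                /* every consumed block is one we no longer need to reserve */
                shortfall -= min(shortfall, used);
        }

        /* zero shortfall: the reservation fits without scanning the rest */
        return shortfall ? -ENOSPC : 0;
}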

Brian

> When we set pag[if]_init in the per-AG structure, we can back off the
> space reservation by the number of blocks in the trees tracked by that
> AG header, which will add that quantity to fdblocks.  We can handle the
> ENOSPC case by modifying the per-AG blockgc worker to load the AGF/AGI
> if they aren't already.
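
(A sketch of the scheme Dave and Darrick describe above: reserve the
theoretical worst case for every AG up front, then hand back each AG's
already-consumed portion the first time its headers are read. All the
helpers here are hypothetical, and real code would go through the
proper AG reservation and fdblocks accounting paths:)

static int
example_mount_time_resv(
        struct xfs_mount        *mp)
{
        /* worst-case finobt/rmapbt/refcountbt reservation for one AG */
        xfs_extlen_t            ag_max = example_ag_resv_max(mp);  /* hypothetical */
        uint64_t                want = (uint64_t)ag_max * mp->m_sb.sb_agcount;

        /* try to set aside the theoretical maximum without reading any AGF */
        if (example_reserve_fdblocks(mp, want) == 0)  /* hypothetical */
                return 0;

        /* otherwise fall back to scanning AG headers as the code does today */
        return example_scan_all_ags(mp, mp->m_buf_workqueue);  /* see earlier sketch */
}

static void
example_ag_first_read(
        struct xfs_mount        *mp,
        xfs_extlen_t            tree_blocks)  /* blocks the trees already use */
{
        /*
         * The up-front reservation covered the worst case for this AG, but
         * tree_blocks of it are already allocated on disk, so return that
         * quantity to the free space pool.  (A real implementation would
         * use the fdblocks accounting helpers, not the raw counter.)
         */
        percpu_counter_add(&mp->m_fdblocks, tree_blocks);
}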
> 
> > > > Hence I don't think that any algorithm that requires reading every
> > > > AGF header in the filesystem at mount time on every v5 filesystem
> > > > already out there in production (because finobt triggers this) is a
> > > > particularly good idea...
> > > 
> > > Perhaps not, but the horse bolted 5 years ago. :/
> > 
> > Let's go catch it :P
> 
> FWIW I previously fixed the rmapbt/reflink transaction reservations
> being unnecessarily large, so (provided deferred inode inactivation gets
> reviewed this cycle) I can try to put all these reflink cleanups
> together for the next cycle.
> 
> --D
> 
> > 
> > Cheers,
> > 
> > Dave.
> > -- 
> > Dave Chinner
> > david@fromorbit.com
> 

