From: Dave Chinner <david@fromorbit.com>
To: Brian Foster <bfoster@redhat.com>
Cc: "xfs@oss.sgi.com" <xfs@oss.sgi.com>,
Stefan Priebe <s.priebe@profihost.ag>
Subject: Re: Is XFS suitable for 350 million files on 20TB storage?
Date: Sun, 7 Sep 2014 08:56:54 +1000
Message-ID: <20140906225654.GB9955@dastard>
In-Reply-To: <20140906150412.GB23506@bfoster.bfoster>
On Sat, Sep 06, 2014 at 11:04:13AM -0400, Brian Foster wrote:
> On Sat, Sep 06, 2014 at 09:35:15AM +0200, Stefan Priebe wrote:
> > Hi Dave,
> >
> > Am 06.09.2014 01:05, schrieb Dave Chinner:
> > >On Fri, Sep 05, 2014 at 02:40:32PM +0200, Stefan Priebe - Profihost AG wrote:
> > >>
> > >>Am 05.09.2014 um 14:30 schrieb Brian Foster:
> > >>>On Fri, Sep 05, 2014 at 11:47:29AM +0200, Stefan Priebe - Profihost AG wrote:
> > >>>>Hi,
> > >>>>
> > >>>>I have a backup system with 20TB of storage holding 350 million files.
> > >>>>This was working fine for months.
> > >>>>
> > >>>>But now the free space is so heavily fragmented that I only see the
> > >>>>kworker threads at 4x 100% CPU and write speed being very slow. 15TB of
> > >>>>the 20TB are in use.
> > >
> > >What does perf tell you about the CPU being burnt? (i.e run perf top
> > >for 10-20s while that CPU burn is happening and paste the top 10 CPU
> > >consuming functions).
> >
> > here we go:
> >   15,79%  [kernel]  [k] xfs_inobt_get_rec
> >   14,57%  [kernel]  [k] xfs_btree_get_rec
> >   10,37%  [kernel]  [k] xfs_btree_increment
> >    7,20%  [kernel]  [k] xfs_btree_get_block
> >    6,13%  [kernel]  [k] xfs_btree_rec_offset
> >    4,90%  [kernel]  [k] xfs_dialloc_ag
> >    3,53%  [kernel]  [k] xfs_btree_readahead
> >    2,87%  [kernel]  [k] xfs_btree_rec_addr
> >    2,80%  [kernel]  [k] _xfs_buf_find
> >    1,94%  [kernel]  [k] intel_idle
> >    1,49%  [kernel]  [k] _raw_spin_lock
> >    1,13%  [kernel]  [k] copy_pte_range
> >    1,10%  [kernel]  [k] unmap_single_vma
> >
>
> The top 6 or so items look related to inode allocation, so that probably
> confirms the primary bottleneck is searching for free inodes among the
> existing inode chunks, which is precisely what the finobt is intended to
> resolve. That was introduced in the 3.16 kernel, so unfortunately it is
> not available in 3.10.
*nod*
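
For reference, whether a given filesystem already has the free inode
btree can be seen in its geometry output, and it can only be enabled at
mkfs time; the mount point and device below are placeholders, not paths
from this thread:

  # recent xfsprogs prints a finobt= field in the geometry output
  xfs_info /backup | grep finobt

  # enable it when (re)creating the filesystem; finobt requires the v5 (crc) format
  mkfs.xfs -m crc=1,finobt=1 /dev/sdX

That obviously doesn't help an already-populated v4 filesystem.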
Again, the only workaround for this on a non-finobt fs is to greatly
increase the number of AGs so there are fewer records in each btree to
search.
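
For example (the device and AG count here are purely illustrative, and
the AG count is fixed at mkfs time, so in practice this means re-creating
the filesystem):

  mkfs.xfs -d agcount=64 /dev/sdX

Spreading the same inode population over more AGs keeps each per-AG inode
btree smaller, so the free inode search walks fewer records.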
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com