From: Dave Chinner <david@fromorbit.com>
To: Brian Foster <bfoster@redhat.com>
Cc: "xfs@oss.sgi.com" <xfs@oss.sgi.com>,
Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
Subject: Re: Is XFS suitable for 350 million files on 20TB storage?
Date: Sun, 7 Sep 2014 08:54:24 +1000
Message-ID: <20140906225424.GA9955@dastard>
In-Reply-To: <20140906145105.GA23506@bfoster.bfoster>

On Sat, Sep 06, 2014 at 10:51:05AM -0400, Brian Foster wrote:
> On Sat, Sep 06, 2014 at 09:05:28AM +1000, Dave Chinner wrote:
> > On Fri, Sep 05, 2014 at 02:40:32PM +0200, Stefan Priebe - Profihost AG wrote:
> > >
> > > Am 05.09.2014 um 14:30 schrieb Brian Foster:
> > > > On Fri, Sep 05, 2014 at 11:47:29AM +0200, Stefan Priebe - Profihost AG wrote:
> > > >> Hi,
> > > >>
> > > >> I have a backup system with 20TB of storage holding 350 million files.
> > > >> This was working fine for months.
> > > >>
> > > >> But now the free space is so heavily fragmented that I only see the
> > > >> kworker threads at 4x 100% CPU and write speed being very slow. 15TB
> > > >> of the 20TB are in use.
> >
> > What does perf tell you about the CPU being burnt? (i.e run perf top
> > for 10-20s while that CPU burn is happening and paste the top 10 CPU
> > consuming functions).
> >
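For reference, one way to capture that, assuming perf is available for
the running kernel (exact flags can vary between perf versions):

  # perf record -a -g -- sleep 15
  # perf report --sort symbol | head -20

or, interactively:

  # perf top -g
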
> > > >>
> > > >> Overall there are 350 million files, spread across different
> > > >> directories with at most 5000 per directory.
> > > >>
> > > >> Kernel is 3.10.53 and mount options are:
> > > >> noatime,nodiratime,attr2,inode64,logbufs=8,logbsize=256k,noquota
> > > >>
> > > >> # xfs_db -r -c freesp /dev/sda1
> > > >>    from      to   extents     blocks    pct
> > > >>       1       1  29484138   29484138   2,16
> > > >>       2       3  16930134   39834672   2,92
> > > >>       4       7  16169985   87877159   6,45
> > > >>       8      15  78202543  999838327  73,41
> >
> > With an inode size of 256 bytes, this is going to be your real
> > problem soon - most of the free space is smaller than an inode
> > chunk so soon you won't be able to allocate new inodes, even though
> > there is free space on disk.
> >
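As a quick check, the inode size can be read back with xfs_info run
against the mount point (the /backup path here is just a placeholder
for wherever the filesystem is mounted):

  # xfs_info /backup | grep isize
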
>
> The extent list here is in fsb units, right? 256b inodes means 16k inode
> chunks, in which case it seems like there's still plenty of room for
> inode chunks (e.g., 8-15 blocks -> 32k-64k).
PEBKAC. My bad.
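
For the record, the arithmetic (assuming the mkfs default 4 KiB
filesystem block size and XFS's 64 inodes per inode chunk):

  64 inodes/chunk x 256 bytes/inode = 16 KiB = 4 fsb
  8-15 fsb free extents = 32-64 KiB = room for 2-3 inode chunks each

so the dominant 8-15 block bucket above can still back new inode
allocations.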
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs