From: Bryan J Smith <b.j.smith@ieee.org>
To: Dave Chinner <david@fromorbit.com>
Cc: Bernhard Schmidt <berni@birkenwald.de>,
"xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: Premature "No Space left on device" on XFS
Date: Sat, 08 Oct 2011 02:30:04 -0400 [thread overview]
Message-ID: <f7b04e83-0303-4fa2-9bc0-db42c99228c0@email.android.com> (raw)
In-Reply-To: <20111007233119.GJ3159@dastard>
I figured someone would fill in the gaps in my experience-based assumptions. Excellent info, especially on those filesystem comparisons.
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
Dave Chinner <david@fromorbit.com> wrote:
On Fri, Oct 07, 2011 at 06:58:53AM -0700, Bryan J Smith wrote:
> [ Not really adding any technical meat, but just some past
> experience with XFS, plus Ext3 experience ]
>
> I remember running into this a long time ago when I was first
> playing with XFS for /tmp and /var (I was still a Linux/XFS noob
> at the time, not that I'm an expert today). I ran into the same
> case where both free blocks and inodes were still available
> (although similarly well utilized), and the median file size was
> around 1KiB. It was also a case of many small files being
> written out in a short period.
>
> In my case, I didn't use the XFS debugger to dig into the
> allocation of the extents (I would have if I weren't such a noob;
> good, discrete command to know, thanx!).
>
> Extents are outstanding for data and similar directories, ordering
> and placing large and small files to mitigate fragmentation. But
> in this case, and correct me if I'm wrong, the extents approach is
> really just wasteful, as the files typically fit in a single data
> block or two.
And a single block is still an extent, so there's nothing "wasted" by
having a single-block extent.
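[For anyone hitting the same symptom, the checks described above can be run with standard tools. A minimal sketch; the device path and spool directory are placeholders, and the xfs_db/xfs_bmap lines are commented out because they need access to the actual XFS device:]

```shell
# Quick checks for a premature ENOSPC: block usage vs. inode usage.
df -h /    # free blocks
df -i /    # free inodes; note XFS allocates inodes dynamically,
           # so IUse% alone rarely explains ENOSPC

# Free-space fragmentation summary by extent size (read-only;
# /dev/sdX1 is a placeholder for the real device):
# xfs_db -r -c "freesp -s" /dev/sdX1

# Extent map of a single small file (path is illustrative):
# xfs_bmap -v /var/spool/some-small-file
```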
....
> I've used Ext3 with around 8 million files with a median size well
> under 4KiB (under 32GiB total). It works "well enough." I'm
> curious how Ext4 would do though. I think Ric Wheeler's team (at
> Red Hat) has done some benchmarks on 7+ figure file counts on Ext3
> and Ext4.
And against XFS, too. In case you didn't realise, you're talking to
the person who ran a large number of those tests. ;)
The results were that ext4 is good for create/delete workloads up to 2-4
threads and about 100k files per directory on a decent disk
subsystem (4000 iops). It's much better than ext3, and for those
workloads about 2x as fast as XFS at 1-2 threads. This pattern held
true as long as the disk subsystem could handle the number of iops
that ext4 threw at it. XFS performance came at a much, much lower
iops cost (think an order of magnitude), so it should be more
consistent on a wider range of storage hardware than ext4.
However, XFS was about 3x faster on cold cache lookups than ext4, so
if your workload is dominated by lookups, XFS is definitely the
faster filesystem to use even if creates/unlinks on ext4 are
faster.
As soon as you have more parallelism than 2-4 threads or large
directories, XFS create/unlink speed surpasses ext4 by a large
amount - the best I got out of ext4 was ~80k creates a second, while
XFS topped 130k creates/s at 8 threads. And the lookup speed
differential increases in XFS's favour at larger thread counts as
well.
So it really depends on your workload as to which filesystem will
handle your small files best. Mail spools tend to have lots of
parallelism, which is why XFS works pretty well, even though it is
a small file workload.
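[The shape of the parallel-create workload described above can be sketched in a few lines. This is only an illustration of the test pattern, not the actual benchmark tool or settings used for the numbers quoted; thread counts, file size (~1 KiB), and file counts are placeholders:]

```python
# Minimal sketch of a parallel small-file create benchmark:
# N threads, each creating many ~1 KiB files in its own directory.
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor


def create_files(dirpath: str, count: int, data: bytes = b"x" * 1024) -> int:
    """Create `count` ~1 KiB files in dirpath; return the number created."""
    for i in range(count):
        with open(os.path.join(dirpath, f"f{i:07d}"), "wb") as f:
            f.write(data)
    return count


def bench(threads: int, files_per_thread: int) -> float:
    """Return creates/second with `threads` workers, one directory each."""
    with tempfile.TemporaryDirectory() as base:
        dirs = []
        for t in range(threads):
            d = os.path.join(base, f"t{t}")
            os.mkdir(d)
            dirs.append(d)
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=threads) as pool:
            total = sum(pool.map(lambda d: create_files(d, files_per_thread), dirs))
        elapsed = time.perf_counter() - start
        return total / elapsed


if __name__ == "__main__":
    # Illustrative thread counts; real results depend on the filesystem,
    # the directory sizes, and the iops the disk subsystem can sustain.
    for n in (1, 2, 4, 8):
        print(f"{n} threads: {bench(n, 1000):,.0f} creates/s")
```

[Absolute numbers from a sketch like this say little on their own; the interesting part, per the discussion above, is how the rate scales as the thread count grows.]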
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 19+ messages
2011-10-06 19:55 Premature "No Space left on device" on XFS Bernhard Schmidt
2011-10-07 0:22 ` Stan Hoeppner
2011-10-07 0:47 ` Bernhard Schmidt
2011-10-07 1:37 ` Dave Chinner
2011-10-07 8:40 ` Gim Leong Chin
2011-10-07 23:20 ` Dave Chinner
2011-10-07 11:40 ` Michael Monnerie
2011-10-07 23:17 ` Dave Chinner
2011-10-07 13:49 ` Bernhard Schmidt
2011-10-07 23:14 ` Dave Chinner
2011-10-08 12:29 ` Bernhard Schmidt
2011-10-08 13:18 ` Christoph Hellwig
2011-10-08 22:34 ` Dave Chinner
2011-10-09 14:46 ` Christoph Hellwig
2011-10-08 22:30 ` Dave Chinner
2011-10-07 13:58 ` Bryan J Smith
2011-10-07 23:31 ` Dave Chinner
2011-10-08 6:30 ` Bryan J Smith [this message]
2011-10-08 13:16 ` Christoph Hellwig