From: Dave Chinner <david@fromorbit.com>
To: Emmanuel Florac <eflorac@intellique.com>
Cc: Stan Hoeppner <stan@hardwarefreak.com>, xfs@oss.sgi.com
Subject: Re: Verify filesystem is aligned to stripes
Date: Fri, 26 Nov 2010 23:22:18 +1100
Message-ID: <20101126122218.GH12187@dastard>
In-Reply-To: <20101126091622.264830fa@galadriel.home>
On Fri, Nov 26, 2010 at 09:16:22AM +0100, Emmanuel Florac wrote:
> On Thu, 25 Nov 2010 16:57:00 -0600, you wrote:
>
> > Looking at the stripe size, which is equal to 64 sectors per array
> > member drive (448 sectors total), how exactly is a sub 4KB mail file
> > (8 sectors) going to be split up into equal chunks across a 224KB RAID
> > stripe?
>
> It won't, it will simply end up on one drive (actually one mirror).
> However, because the mirrors are striped together, all drives in the
> array will be solicited in my experience; that's why you need at
> least as many writing threads as there are stripes to reach the top
> IOPS. In your case, writing 56 4K files simultaneously will
> effectively write to all drives at once, hopefully (depending on the
> filesystem allocation policy, though).
>
> > Does 220KB of the stripe merely get wasted?
>
> It's not wasted, it just remains unallocated. What's wasted is
> potential IO performance.
No, that's wrong. I don't have the time to explain the intricacies
of how XFS packs small files together, but it does. You can observe
the result by unpacking a kernel tarball and looking at the layout
with xfs_bmap if you really want to...
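For example, something along these lines would show the packing (the mount point and tarball name here are placeholders, not from the original message):

```shell
# Hypothetical example: unpack a source tree onto an XFS filesystem,
# then inspect the on-disk layout of a batch of small files.
# /mnt/xfs and the tarball path are illustrative.
cd /mnt/xfs
tar xzf ~/linux-2.6.36.tar.gz
sync

# xfs_bmap -v prints the extent map of each file. Adjacent small
# files show extents packed close together on disk, rather than
# each file being padded out to a full stripe width.
for f in linux-2.6.36/kernel/*.c; do
    xfs_bmap -v "$f"
done | head -40
```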
FWIW, for workloads that do random, small IO, XFS works best when you
_turn off_ aligned allocation and just let it spray the IO at the
disks. This works best if you are using RAID 0/1/10. All the numbers
I've been posting are with aligned allocation turned off (i.e. no
sunit/swidth set).
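A sketch of what that looks like at mkfs time (the device and mount point are placeholders):

```shell
# Placeholder device. On a plain block device with no detected
# geometry, mkfs.xfs does not align allocation to any RAID stripe.
mkfs.xfs /dev/md0

# If mkfs picks up stripe geometry automatically (e.g. from an md
# device), aligned allocation can be explicitly disabled:
mkfs.xfs -d su=0,sw=0 /dev/md0

# An existing filesystem's alignment can be checked with:
xfs_info /mnt/xfs | grep -E 'sunit|swidth'
```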
> What appears from the benchmarks I ran along the year is that anyway
> you turn it, whatever caching, command tag queuing and reordering
> your're using, a single thread can't reach maximal IOPS throughput on
> an array, i. e. writing on all drives simultaneously; a single thread
> writing to the fastest RAID 10 with 4K or 8K IOs can't do much better
> than with a single drive, 200 to 300 IOPS for a 15k drive.
Assuming synchronous IO. If you are doing async IO, a single CPU
should be able to keep hundreds of SRDs (Spinning Rust Disks) busy...
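One way to see the difference is a pair of fio runs (the invocations below are illustrative, not from the original thread; /dev/md0 is a placeholder, and these WRITE to the device, destroying its contents):

```shell
# Synchronous 4k random writes from one thread: each IO waits for
# the previous one, so throughput is bound by per-IO latency and
# stays near single-drive IOPS regardless of array width.
fio --name=sync4k --filename=/dev/md0 --rw=randwrite --bs=4k \
    --direct=1 --ioengine=sync --numjobs=1 \
    --runtime=30 --time_based

# Async IO from the same single thread: with many requests kept in
# flight, one CPU can keep many spindles busy simultaneously.
fio --name=async4k --filename=/dev/md0 --rw=randwrite --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=64 --numjobs=1 \
    --runtime=30 --time_based
```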
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs