From: Stan Hoeppner <stan@hardwarefreak.com>
To: David Brown <david@westcontrol.com>
Cc: CoolCold <coolthecold@gmail.com>,
Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: XFS on top RAID10 with odd drives count and 2 near copies
Date: Mon, 13 Feb 2012 07:46:46 -0600
Message-ID: <4F391446.4060308@hardwarefreak.com>
In-Reply-To: <4F38CEE2.6000701@westcontrol.com>
On 2/13/2012 2:50 AM, David Brown wrote:
> It is also far from clear whether a linear concat XFS is better than a
> normal XFS on a raid0 of the same drives (or raid1 pairs). I think it
As always, the answer depends on the workload. As you correctly stated
above (I snipped it), you'll end up with fewer head seeks with the linear
array than with the RAID0. How many fewer depends on the workload,
again, as always.
I need to correct something I stated in my previous post that's relevant
here. I forgot that the per-drive read_ahead_kb value is ignored when a
filesystem resides on an md device. Read-ahead works at the file
descriptor level, not at the block device level. So when using mdraid,
the read_ahead_kb value of the md device is used and the per-drive
settings are ignored. Thus kernel read-ahead efficiency doesn't suffer
on striped mdraid as I previously stated. Apologies for the error.
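If you want to see which value actually applies, here's a quick Python
sketch that just prints the relevant sysfs knobs; the device names (md0,
sdb..sdd) are only examples, substitute your own:

  # Print the read_ahead_kb that applies to the md device versus the
  # per-member values that are ignored once the filesystem sits on md.
  def read_ahead_kb(dev):
      with open("/sys/block/%s/queue/read_ahead_kb" % dev) as f:
          return f.read().strip()

  print("md0 (used):", read_ahead_kb("md0"))
  for member in ("sdb", "sdc", "sdd"):
      print(member, "(ignored for a filesystem on md0):", read_ahead_kb(member))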
> will have lower average latencies on small accesses if you also have big
> reads/writes mixed in, but you will also have lower throughput for
> larger accesses. For some uses, this sort of XFS arrangement is ideal -
> a particular favourite is for mail servers. But I suspect in many other
> cases you will stray enough from the ideal access patterns to lose any
> benefits it might have.
Yeah, if one will definitely have a mixed workload that includes
reading/writing sufficiently large files (more than a few MB) where
striping would be of benefit, then RAID0 over mirror pairs would be
better. Once you go there, though, you may as well go with RAID10 and a
fast layout, unless your workload is such that a single md thread eats a
CPU; then the layered RAID0 over mirror pairs may be the better option.
> Stan is the expert on this, and can give advice on getting the best out
> of XFS. But personally I don't think a linear concat there is the best
> way to go - especially when you want LVM and multiple filesystems on the
> array.
I'm no XFS expert. The experts are the devs. As far as users go, I
probably know some of the XFS internals and theory better than many others.
For the primary workload as stated, XFS over linear is a perfect fit.
WRT doing thin provisioning with virtual machines on this host, i.e.
using sparse files to create virtual disks for the VMs and the like, I'm
not sure how well that would work on a linear array with a single XFS
filesystem. As David mentions, I definitely wouldn't put multiple XFS
filesystems on the array, with or without LVM. That can lead to
excessive head seeking, and you don't have the spindle RPM to absorb
lots of seeks.
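For anyone not familiar with the sparse file approach, here's a minimal
Python sketch of what thin provisioning means in this context; the path
and size are made up:

  import os

  image = "/vmstore/guest1.img"        # hypothetical path on the XFS filesystem
  with open(image, "wb") as f:
      f.truncate(40 * 1024**3)         # 40 GiB apparent size, no blocks allocated yet

  st = os.stat(image)
  print("apparent size:", st.st_size)               # reports the full 40 GiB
  print("allocated:", st.st_blocks * 512, "bytes")  # stays tiny until the guest writes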
WRT sparse file virtual disks, it would depend a lot on the IO access
patterns of the VM guests and their total IO load. If it's minimal, then
XFS + linear would be fine. If the guests do a lot of IO and their disk
files all end up in the same AG, that wouldn't be so good. Without more
information it's hard to say.
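If you do go with sparse files on a single XFS, one simple hedge is to
give each guest its own top-level directory. XFS tends to place new
directories in different AGs, and a file's initial allocation follows
its parent directory, so the images are less likely to all pile into one
AG. A rough sketch, with the mount point and guest names invented:

  import os

  vmstore = "/vmstore"                   # hypothetical XFS mount point
  for name in ("mail1", "web1", "db1"):  # hypothetical guest names
      d = os.path.join(vmstore, name)
      if not os.path.isdir(d):
          os.makedirs(d)                 # each guest gets its own directory (and likely its own AG)
      with open(os.path.join(d, "disk0.img"), "wb") as f:
          f.truncate(20 * 1024**3)       # 20 GiB sparse image per guest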
> As another point, since you have mostly read accesses, you should
> probably use raid10,f2 far layout rather than near layout. It's a bit
> slower for writes, but can be much faster for reads.
Near.. far.. whereeeeever you are...
Neil must have watched Titanic just before he came up with these labels. ;)
--
Stan