From: Dave Chinner <david@fromorbit.com>
To: Dave Hall <kdhall@binghamton.edu>, linux-xfs@vger.kernel.org
Subject: Re: XFS + LVM + DM-Thin + Multi-Volume External RAID
Date: Fri, 25 Nov 2016 06:44:19 +1100 [thread overview]
Message-ID: <20161124194419.GR28177@dastard> (raw)
In-Reply-To: <20161124094332.o3hbaubrligsa7l3@eorzea.usersys.redhat.com>
On Thu, Nov 24, 2016 at 10:43:32AM +0100, Carlos Maiolino wrote:
> Hi,
>
> On Wed, Nov 23, 2016 at 08:23:42PM -0500, Dave Hall wrote:
> > Hello,
> >
> > I'm planning a storage installation on new hardware and I'd like to
> > configure for best performance. I will have 24 to 48 drives in a
> > SAS-attached RAID box with dual 12Gb/s controllers (Dell MD3420 with 10K
> > 1.8TB drives). The server is dual socket with 28 cores, 256GB RAM, dual
> > 12Gb HBAs, and multiple 10Gb NICs.
> >
> > My workload is NFS for user home directories - highly random access patterns
> > with frequent bursts of random writes.
> >
> > In order to maximize performance I'm planning to make multiple small RAID
> > volumes (i.e. RAID5 - 4+1, or RAID6 - 8+2) that would be either striped or
> > concatenated together.
> >
> > I'm looking for information on:
> >
> > - Are there any cautions or recommendations about XFS stability/performance
> > on a thin volume with thin snapshots?
> >
> > - I've read that there are tricks and calculations for aligning XFS to the
> > RAID stripes. Can you suggest any guidelines or tools for calculating the
> > right configuration?
>
> There is no magical trick :), you need to configure the stripe unit and
> stripe width according to your RAID configuration. You should set the
> stripe unit (su option) to the size of the stripes on your RAID, and set
> the stripe width (sw option) according to the number of data disks in your
> array (for a 4+1 RAID 5 it should be 4, for an 8+2 RAID 6 it should be 8).
mkfs.xfs will do this setup automatically on software raid and any
block device that exports the necessary information to set it up.
In general, it's only on older/cheaper hardware RAID that you have
to worry about this anymore.
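For the hardware-RAID case where the geometry does have to be given by hand, a minimal sketch of deriving the mkfs.xfs options. The 256KiB chunk size is an assumption (read the real per-disk segment size from your RAID controller), and /dev/XXX is a placeholder for the actual volume:

```shell
#!/bin/sh
# Hedged sketch: hand-derived mkfs.xfs alignment for a RAID volume that
# doesn't export its geometry to the block layer.
CHUNK_KB=256    # per-disk chunk ("segment") size -- hypothetical, check
                # your controller's actual setting
DATA_DISKS=4    # 4+1 RAID5 -> 4 data disks; an 8+2 RAID6 would be 8

# su = one disk's chunk, sw = number of *data* disks (parity excluded).
echo "mkfs.xfs -d su=${CHUNK_KB}k,sw=${DATA_DISKS} /dev/XXX"
```

After mkfs, `xfs_info` reports the same geometry as sunit/swidth (in filesystem blocks), which is a quick way to verify the alignment was picked up.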
> > - I've read also about tuning the number of allocation groups to reflect the
> > CPU configuration of the server. Any suggestions on this?
> >
>
> Allocation groups can't be bigger than 1TB. Assuming the count should
> reflect your cpu configuration is wrong; having too few or too many
> allocation groups can kill your performance, and you might also face
> other allocation problems in the future, as the filesystem ages, if it
> is running with very small allocation groups.
It also depends on your storage, mostly. SSDs can handle
agcount=NCPUS*2 easily, but for spinning storage this will cause
additional seek loading and slow things down. In this case, the
defaults are best.
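As a back-of-envelope illustration of why the two heuristics differ (all numbers hypothetical): the 1TB cap means filesystem size sets a floor on the AG count independent of CPUs, while the SSD heuristic above scales with CPUs:

```shell
#!/bin/sh
# Sketch: AG counts implied by the two rules, for a hypothetical
# 28-core server with a 16TB filesystem.
NCPUS=28
FS_TB=16
MIN_AGS=$(( FS_TB ))        # 1TB max per AG -> at least 16 AGs,
                            # regardless of CPU count
SSD_AGS=$(( NCPUS * 2 ))    # the agcount=NCPUS*2 heuristic for SSDs
echo "$MIN_AGS $SSD_AGS"
```

On spinning disks the larger of these counts would mean more seek load, which is why the mkfs defaults are the better starting point there.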
> Determining the size of the allocation groups is a case-by-case decision,
> and it might need some experimenting.
>
> Since you are dealing with thin provisioned devices, I'd be more careful
> there. If you start with a small filesystem and use the default
> configuration for mkfs, it will give you a number of AGs according to your
> current block device size, which can be a problem in the future when you
> decide to extend the filesystem; AG size can't be changed after you make
> the filesystem. Search the xfs list and you will find reports of
> performance problems that ended up being caused by very small filesystems
> that were extended later, leaving them with lots of AGs.
Yup, rule of thumb is that growing the fs size by an order of
magnitude is fine, growing it by two orders of magnitude will cause
problems.
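To put numbers on that rule of thumb (the 4-AG starting count is an assumption, a typical mkfs.xfs default for a small filesystem; check yours with xfs_info): AG size is fixed at mkfs time, so growing only adds more AGs of the original size:

```shell
#!/bin/sh
# Sketch: AG count after growing a small thin-provisioned filesystem.
INITIAL_GB=100
AG_COUNT=4                          # hypothetical mkfs default for 100GB
AG_GB=$(( INITIAL_GB / AG_COUNT ))  # 25GB per AG, fixed forever
GROWN_GB=10000                      # two orders of magnitude: 100GB -> 10TB
echo $(( GROWN_GB / AG_GB ))        # AG count after xfs_growfs
```

A 10x grow of the same filesystem would land at 40 AGs, which is still within reason; the 100x grow leaves 400 undersized AGs, which is the aged-filesystem problem described above.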
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 8+ messages
2016-11-24 1:23 XFS + LVM + DM-Thin + Multi-Volume External RAID Dave Hall
2016-11-24 9:43 ` Carlos Maiolino
2016-11-24 19:44 ` Dave Chinner [this message]
2016-11-25 14:50 ` Dave Hall
2016-11-26 17:52 ` Eric Sandeen
[not found] ` <ca974620-fff5-e26b-897d-c1c62d47cc64@binghamton.edu>
[not found] ` <20161125111814.p7ltczag7akqk3w5@eorzea.usersys.redhat.com>
2016-11-25 15:20 ` Dave Hall
2016-11-25 17:09 ` Dave Hall
2016-11-27 21:56 ` Dave Chinner