From: Benjamin Coddington <bcodding@redhat.com>
To: Daniel Pocock <daniel@pocock.pro>
Cc: linux-nfs@vger.kernel.org
Subject: Re: storage controllers for use with NFS+BtrFs
Date: Tue, 20 Jan 2015 09:12:20 -0500 (EST)
Message-ID: <alpine.OSX.2.19.9992.1501200906150.498@planck.local>
In-Reply-To: <54BE5A41.40707@pocock.pro>
On Tue, 20 Jan 2015, Daniel Pocock wrote:
> On 20/01/15 14:25, Benjamin Coddington wrote:
> > Hi Daniel,
> >
> > On Mon, 19 Jan 2015, Daniel Pocock wrote:
> >> I've been looking into the issue of which storage controllers are
> >> suitable for use with NFS + BtrFs (or NFS + ZFS) and put some comments
> >> about it on my blog[1] today.
> >>
> >> I understand that for NFS it is generally desirable to have a
> >> non-volatile write cache if you want good write performance.
> >>
> >> On the other hand, self-healing filesystems (BtrFs and ZFS) like having
> >> direct access to the disks, and RAID cards with caches don't always
> >> give that level of access to the drives.
> >>
> >> Can anybody give any practical suggestions about how to reconcile these
> >> requirements and still get good NFS write performance on these
> >> filesystems, given the types of HBA and RAID cards available?
> > I don't think that reconciling these requirements will necessarily yield
> > good NFS write performance. You've got to define what "good" means.
> > It sounds like you want fast commits, but how fast, how many, and what
> > size?
> A more specific example of the use case will probably answer that:
> - consider a home network or small office, fewer than 10 users
> - aim to improve the performance of tasks such as:
>   - unzipping source tarballs with many files in them
>   - switching between branches in Git when many files change
>   - compiling large projects with many object files or Java class files
>     to be written, and/or building packages
>
> For compiling, I can obviously generate my object files on tmpfs or use
> some other workaround to get performance.
It looks like you have a reproducible workload - that's going to make
your testing much easier.
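For what it's worth, those cases are easy to time directly on the mount.
A rough sketch (the mount point, tarball, repo, and branch name below
are just placeholders for whatever you actually have):

    # time an unpack onto the NFS mount; the sync makes sure the
    # measurement includes flushing dirty data to the server
    cd /mnt/nfs/scratch
    time sh -c 'tar xzf /tmp/linux-3.18.tar.gz && sync'

    # same idea for a branch switch in an existing checkout
    cd /mnt/nfs/scratch/some-repo
    time sh -c 'git checkout other-branch && sync'

Running each a few times from the same starting state should give you
numbers you can compare across controller configurations.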
> > If you're building on ZFS, the best thing would be to find a very fast
> > ZIL device, but if you're already building pools on SSD, you'd need
> > something really fast, like DDR, to see any gain. A ramdrive ZIL might
> > be a nice way to test that on your setup before spending anything.
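To expand on that: if you want to try a ramdrive ZIL, something like the
following should work (an untested sketch; 'tank' is a placeholder pool
name, and a RAM-backed log loses committed sync writes on power failure,
so use it for benchmarking only):

    # create a 4GiB ram block device (rd_size is in KiB)
    modprobe brd rd_nr=1 rd_size=4194304

    # attach it to the pool as a separate log (ZIL) device
    zpool add tank log /dev/ram0

    # ... run the NFS write tests, then take it back out
    zpool remove tank /dev/ram0

If the numbers don't move with a RAM-backed log, a faster ZIL device
probably isn't where your money should go.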
> >
> > Many BBU-backed RAID controllers allow JBOD-like modes that still slice
> > up their cache for writes to disk; for example, check out megacli's
> > "-CfgEachDskRaid0".
>
> Thanks for that feedback
>
> I'm using BtrFs and md at present (1TB for each), and I'm thinking about
> going up to 4TB or 6TB and deciding whether to use BtrFs or ZFS on most
> of it.
If you do, let me know what you discover.
Ben