* storage controllers for use with NFS+BtrFs
From: Daniel Pocock
To: linux-nfs
Date: 2015-01-19 21:23 UTC

I've been looking into the issue of which storage controllers are
suitable for use with NFS + BtrFs (or NFS + ZFS) and put some comments
about it on my blog[1] today.

I understand that for NFS it is generally desirable to have non-volatile
write cache if you want good write performance.

On the other hand, self-healing file systems (BtrFs and ZFS) like having
direct access to disks and those RAID cards with caches don't always
give the same level of access to the volume.

Can anybody give any practical suggestions about how to reconcile these
requirements and experience good NFS write performance onto these
filesystems given the type of HBA and RAID cards available?

1. http://danielpocock.com/storage-controllers-for-small-linux-nfs-networks
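[Editor's note: the gap between cached and stable-storage writes that Daniel describes can be measured locally before buying any hardware. A minimal sketch, assuming GNU dd on Linux; the paths are illustrative, and on a real server WORKDIR would point at the exported BtrFs/ZFS mount:]

```shell
# Compare buffered vs. synchronous small-block writes.  /tmp stands in
# for the exported filesystem here.
WORKDIR=$(mktemp -d)

# Buffered writes: roughly what the client sees when the server can
# acknowledge before data reaches stable storage (e.g. NVRAM cache).
dd if=/dev/zero of="$WORKDIR/buffered" bs=4k count=1000

# Synchronous writes: oflag=dsync forces each block to stable storage,
# approximating the cost of NFS COMMIT / a sync export without a
# non-volatile write cache.  dd prints the throughput of each run.
dd if=/dev/zero of="$WORKDIR/sync" bs=4k count=1000 oflag=dsync
```

The ratio between the two reported rates gives a rough sense of how much a battery-backed or NVRAM cache could help this particular disk.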
* Re: storage controllers for use with NFS+BtrFs
From: Benjamin Coddington
To: Daniel Pocock; +Cc: linux-nfs
Date: 2015-01-20 13:25 UTC

Hi Daniel,

On Mon, 19 Jan 2015, Daniel Pocock wrote:
> I've been looking into the issue of which storage controllers are
> suitable for use with NFS + BtrFs (or NFS + ZFS) and put some comments
> about it on my blog[1] today.
>
> I understand that for NFS it is generally desirable to have non-volatile
> write cache if you want good write performance.
>
> On the other hand, self-healing file systems (BtrFs and ZFS) like having
> direct access to disks and those RAID cards with caches don't always
> give the same level of access to the volume.
>
> Can anybody give any practical suggestions about how to reconcile these
> requirements and experience good NFS write performance onto these
> filesystems given the type of HBA and RAID cards available?

I don't think that reconciling these requirements is going to necessarily
equal good NFS write performance.  You've got to define what good means.
It sounds like you want fast commits, but how fast, how many, what size?

If you're building on ZFS, best thing would be to find a very fast ZIL
device, but if you're already building pools on SSD to get any gain you'd
need something really fast like DDR.  A ramdrive ZIL might be a nice way to
test that on your setup before spending anything.

Many BBU-d RAID controllers allow JBOD modes that still slice up their
cache for writes to disk, for example check out megacli's
"-CfgEachDskRaid0".

Ben

> 1. http://danielpocock.com/storage-controllers-for-small-linux-nfs-networks
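[Editor's note: the "ramdrive ZIL" test Ben suggests can be sketched with the brd ramdisk module and a separate log device. This is a sketch only: the pool name "tank" is an assumption, it requires root plus the ZFS utilities, and a ramdisk ZIL loses in-flight data on power failure, so it is for benchmarking only, never production:]

```shell
# Create a ram-backed block device (rd_size is in KiB, so 1 GiB here)
# to stand in for a fast SLOG/ZIL device.
modprobe brd rd_nr=1 rd_size=1048576

# Attach it to the pool as a separate intent-log device.
zpool add tank log /dev/ram0

# ...run the NFS workload and compare against the baseline, then
# detach the log device when done:
zpool remove tank /dev/ram0
```

If the ramdisk ZIL closes most of the performance gap, a real power-safe SLOG (NVRAM or enterprise SSD) is likely worth the spend.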
* Re: storage controllers for use with NFS+BtrFs
From: Daniel Pocock
To: Benjamin Coddington; +Cc: linux-nfs
Date: 2015-01-20 13:38 UTC

On 20/01/15 14:25, Benjamin Coddington wrote:
> Hi Daniel,
>
> On Mon, 19 Jan 2015, Daniel Pocock wrote:
>> I've been looking into the issue of which storage controllers are
>> suitable for use with NFS + BtrFs (or NFS + ZFS) and put some comments
>> about it on my blog[1] today.
>>
>> I understand that for NFS it is generally desirable to have non-volatile
>> write cache if you want good write performance.
>>
>> On the other hand, self-healing file systems (BtrFs and ZFS) like having
>> direct access to disks and those RAID cards with caches don't always
>> give the same level of access to the volume.
>>
>> Can anybody give any practical suggestions about how to reconcile these
>> requirements and experience good NFS write performance onto these
>> filesystems given the type of HBA and RAID cards available?
>
> I don't think that reconciling these requirements is going to necessarily
> equal good NFS write performance.  You've got to define what good means.
> It sounds like you want fast commits, but how fast, how many, what size?

A more specific example of the use-case will probably answer that:

- consider a home network or small office, less than 10 users
- aim to improve the performance of tasks such as:
  - unzipping source tarballs with many files in them
  - switching between branches in Git when many files change
  - compiling large projects with many object files or Java class files
    to be written, and/or building packages

For compiling, I can obviously generate my object files on tmpfs or some
other workaround to get performance.

> If you're building on ZFS, best thing would be to find a very fast ZIL
> device, but if you're already building pools on SSD to get any gain you'd
> need something really fast like DDR.  A ramdrive ZIL might be a nice way to
> test that on your setup before spending anything.
>
> Many BBU-d RAID controllers allow JBOD modes that still slice up their
> cache for writes to disk, for example check out megacli's
> "-CfgEachDskRaid0".

Thanks for that feedback.

I am using BtrFs and md at present (1TB for each) and I'm thinking
about going up to 4TB or 6TB and deciding whether to use BtrFs or ZFS on
most of it.

Regards,

Daniel
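[Editor's note: the many-small-files cases Daniel lists are easy to turn into a repeatable benchmark. A minimal sketch; the file count and contents are arbitrary, and over NFS this kind of extraction is dominated by per-file metadata operations, which is exactly where the sync/COMMIT cost shows up:]

```shell
# Generate a tree of many small files, tar it up, and extract it --
# a stand-in for "unzipping a source tarball" or a large build's
# object-file output.  Point DST at the NFS mount to benchmark it.
SRC=$(mktemp -d)
DST=$(mktemp -d)

for i in $(seq 1 500); do
    echo "object file $i" > "$SRC/obj$i.o"
done

tar -C "$SRC" -cf "$DST/work.tar" .
mkdir "$DST/extract"
tar -C "$DST/extract" -xf "$DST/work.tar"

echo "extracted $(ls "$DST/extract" | wc -l) files"
```

Wrapping the extraction step in `time` on each candidate configuration gives directly comparable numbers for the workloads described above.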
* Re: storage controllers for use with NFS+BtrFs
From: Benjamin Coddington
To: Daniel Pocock; +Cc: linux-nfs
Date: 2015-01-20 14:12 UTC

On Tue, 20 Jan 2015, Daniel Pocock wrote:
> On 20/01/15 14:25, Benjamin Coddington wrote:
> > Hi Daniel,
> >
> > On Mon, 19 Jan 2015, Daniel Pocock wrote:
> >> I've been looking into the issue of which storage controllers are
> >> suitable for use with NFS + BtrFs (or NFS + ZFS) and put some comments
> >> about it on my blog[1] today.
> >>
> >> I understand that for NFS it is generally desirable to have non-volatile
> >> write cache if you want good write performance.
> >>
> >> On the other hand, self-healing file systems (BtrFs and ZFS) like having
> >> direct access to disks and those RAID cards with caches don't always
> >> give the same level of access to the volume.
> >>
> >> Can anybody give any practical suggestions about how to reconcile these
> >> requirements and experience good NFS write performance onto these
> >> filesystems given the type of HBA and RAID cards available?
> >
> > I don't think that reconciling these requirements is going to necessarily
> > equal good NFS write performance.  You've got to define what good means.
> > It sounds like you want fast commits, but how fast, how many, what size?
>
> More specific example of the use-case will probably answer that:
> - consider a home network or small office, less than 10 users
> - aim to improve the performance of tasks such as:
>   - unzipping source tarballs with many files in them
>   - switching between branches in Git when many files change
>   - compiling large projects with many object files or Java class files
>     to be written, and/or building packages
>
> For compiling, I can obviously generate my object files on tmpfs or some
> other workaround to get performance.

It looks like you have a reproducible workload - that's going to make your
testing much easier.

> > If you're building on ZFS, best thing would be to find a very fast ZIL
> > device, but if you're already building pools on SSD to get any gain you'd
> > need something really fast like DDR.  A ramdrive ZIL might be a nice way to
> > test that on your setup before spending anything.
> >
> > Many BBU-d RAID controllers allow JBOD modes that still slice up their
> > cache for writes to disk, for example check out megacli's
> > "-CfgEachDskRaid0".
>
> Thanks for that feedback
>
> I had been using BtrFs and md at present (1TB for each) and I'm thinking
> about going up to 4TB or 6TB and deciding whether to use BtrFs or ZFS on
> most of it.

If you make discoveries, let me know what you find out.

Ben
Thread overview: 4+ messages

2015-01-19 21:23 storage controllers for use with NFS+BtrFs  Daniel Pocock
2015-01-20 13:25 ` Benjamin Coddington
2015-01-20 13:38   ` Daniel Pocock
2015-01-20 14:12     ` Benjamin Coddington