From: Daniel Pocock <daniel@pocock.com.au>
To: "J. Bruce Fields" <bfields@fieldses.org>
Cc: "Myklebust, Trond" <Trond.Myklebust@netapp.com>,
"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>
Subject: Re: extremely slow nfs when sync enabled
Date: Tue, 08 May 2012 12:06:59 +0000
Message-ID: <4FA90C63.7000505@pocock.com.au>
In-Reply-To: <20120507171759.GA10137@fieldses.org>
On 07/05/12 17:18, J. Bruce Fields wrote:
> On Mon, May 07, 2012 at 01:59:42PM +0000, Daniel Pocock wrote:
>>
>>
>> On 07/05/12 09:19, Daniel Pocock wrote:
>>>
>>>>> Ok, so the combination of:
>>>>>
>>>>> - enable writeback with hdparm
>>>>> - use ext4 (and not ext3)
>>>>> - barrier=1 and data=writeback? or data=?
>>>>>
>>>>> - is there a particular kernel version (on either client or server side)
>>>>> that will offer more stability using this combination of features?
>>>>
>>>> Not that I'm aware of. As long as you have a kernel > 2.6.29, then LVM
>>>> should work correctly. The main problem is that some SATA hardware tends
>>>> to be buggy, defeating the methods used by the barrier code to ensure
>>>> data is truly on disk. I believe that XFS will therefore actually test
>>>> the hardware when you mount with write caching and barriers, and should
>>>> report if the test fails in the syslogs.
>>>> See http://xfs.org/index.php/XFS_FAQ#Write_barrier_support.
>>>>
>>>>> I think there are some other variations of my workflow that I can
>>>>> attempt too, e.g. I've contemplated compiling C++ code onto a RAM disk
>>>>> because I don't need to keep the hundreds of object files.
>>>>
>>>> You might also consider using something like ccache and set the
>>>> CCACHE_DIR to a local disk if you have one.
>>>>
>>>
>>>
>>> Thanks for the feedback about these options, I am going to look at these
>>> strategies more closely
>>>
>>
>>
>> I decided to try and take md and LVM out of the picture, I tried two
>> variations:
>>
>> a) the boot partitions are not mirrored, so I reformatted one of them as
>> ext4,
>> - enabled write-cache for the whole of sdb,
>> - mounted ext4, barrier=1,data=ordered
>> - and exported this volume over NFS
>>
>> unpacking a large source tarball on this volume, iostat reports write
>> speeds that are even slower, barely 300kBytes/sec
>
> How many file creates per second?
>
I ran:

  nfsstat -s -o all -l -Z5

and during the test (unpacking the tarball) I saw numbers like these
every 5 seconds for about 2 minutes:
nfs v3 server        total:      319
------------- ------------- --------
nfs v3 server        getattr:      1
nfs v3 server        setattr:    126
nfs v3 server        access:       6
nfs v3 server        write:       61
nfs v3 server        create:      61
nfs v3 server        mkdir:        3
nfs v3 server        commit:      61
That works out to roughly 60 creates (and 60 commits) per 5-second
interval, i.e. about 12 file creates per second.

I also decided to expand the scope of my testing, because I want to
rule out the possibility that my HP MicroServer with onboard SATA is
the culprit.  I set up two other NFS servers (all Debian 6, kernel
2.6.38):
HP Z800 Xeon workstation
  Intel Corporation 82801 SATA RAID Controller (operating as AHCI)
  VB0250EAVER (250GB 7200rpm)

Lenovo Thinkpad X220
  Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 04)
  SSDSA2BW160G3L (160GB SSD)
Both the Z800 and X220 run as NFSv3 servers.  Each one has a fresh
10GB logical volume formatted ext4:
  mount options:  barrier=1,data=ordered
  write cache (hdparm -W 1):  enabled
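
In concrete terms, the per-server setup was roughly the following
(the volume group, device and mount point names are illustrative
rather than the exact ones I used):

  # enable the drive's write cache
  hdparm -W 1 /dev/sda

  # create a fresh 10GB logical volume and format it as ext4
  lvcreate -L 10G -n nfstest vg0
  mkfs.ext4 /dev/vg0/nfstest

  # mount with barriers enabled and ordered data journalling
  mkdir -p /srv/nfstest
  mount -o barrier=1,data=ordered /dev/vg0/nfstest /srv/nfstest

  # re-export after adding the volume to /etc/exports
  exportfs -ra
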
Results:

NFS client: X220
NFS server: Z800 (regular disk)
  iostat reports about 1,000 kBytes/sec when unpacking the tarball
  (this is just as slow as the original NFS server)

NFS client: Z800
NFS server: X220 (SSD disk)
  iostat reports about 22,000 kBytes/sec when unpacking the tarball
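
(The throughput figures above come from watching the kB_wrtn/s column
for the relevant device in something like:

  iostat -k 5

run on the server over each 5-second interval - the exact invocation
may have differed slightly.)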
It seems that buying a pair of SSDs for my HP MicroServer would let me
use NFS `sync' and still enjoy healthy performance - roughly 20x
faster than the spinning disk.
However, is there really no other way to get more speed out of NFS when
using the `sync' option?
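
For reference, the behaviour I am asking about is the per-export
`sync'/`async' flag in /etc/exports (the path and client spec below
are illustrative):

  # sync: the server commits data to stable storage before replying
  /srv/nfstest  192.168.1.0/24(rw,sync,no_subtree_check)

  # async: reply before data reaches disk - much faster, but data can
  # be lost if the server crashes
  # /srv/nfstest  192.168.1.0/24(rw,async,no_subtree_check)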