From: "Alexandre Depoutovitch" <adepoutovitch-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
To: <linux-nfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
<linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
Subject: RE: [PATCH RFC v2] Performing direct I/O on sector-aligned requests
Date: Tue, 8 May 2012 12:51:28 -0700 (PDT)
Message-ID: <a4ab669d.00001670.0000002d@aldep-VC.vmware.com>
In-Reply-To: <1508773761.4854678.1335731939770.JavaMail.root-uUpdlAIx0AHkdGAVcyJ/gDSPNL9O62GLZeezCHUQhQ4@public.gmane.org>
----- Original Message -----
> From: "J. Bruce Fields" <bfields-uC3wQj2KruNg9hUCZPvPmw@public.gmane.org>
> To: "Alexandre Depoutovitch" <adepoutovitch-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
> Sent: Friday, April 27, 2012 4:51:20 PM
> Subject: Re: About Direct I/O
>
> On Fri, Apr 27, 2012 at 01:22:46PM -0700, Alexandre Depoutovitch
> wrote:
> >
> > The tests have been done on a hardware RAID10 array with 8 10K 450GB
> > SAS drives. Raid adapter was HP P410i.
>
> It might be worth also testing with a single drive if you want to see
> the worst case for synchronous writes. (That adapter may have a
> battery-backed cache that lets it respond to writes immediately?)
Yes, the adapter has a battery-backed cache (1 GB), and you are right: it
is the main reason for the significant improvement when doing direct I/O.
Sync random writes happen an order of magnitude faster than reads. I also
tested direct I/O on a cheap Western Digital 7.2K SATA drive (WD10EALX) on
an Intel 82801 SATA controller. There was no performance gain with direct
I/O, because write speed was in fact 1.5 times slower than read speed.
However, there was no performance degradation either, whether direct or
buffered I/O was used (in sync mode).
So it looks like direct I/O for NFS is beneficial for random,
f/s-unaligned, synchronous writes on adapters with NVRAM. In other cases
it can be turned on or off either automatically, based on request
alignment and the O_SYNC flag, or manually, based on hardware
characteristics.
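As a rough illustration of the automatic case, here is a minimal sketch
of such an alignment/O_SYNC check. This is not the patch's actual code:
the helper nfsd_can_use_dio and the fixed SECTOR_SIZE are hypothetical,
standing in for whatever the real code would use for the device's
logical block size.

#include <linux/types.h>

#define SECTOR_SIZE 512	/* assumed block size, for illustration only */

/* Hypothetical helper: a write request qualifies for direct I/O only
 * when both its offset and length are sector-aligned and the write is
 * synchronous (O_SYNC/stable), which is where NVRAM-backed adapters
 * showed the gain above. */
static bool nfsd_can_use_dio(loff_t offset, size_t len, bool sync_write)
{
	/* Reject offsets or lengths not aligned to the sector size. */
	if ((offset | (loff_t)len) & (SECTOR_SIZE - 1))
		return false;

	/* Buffered I/O is as good or better for asynchronous writes. */
	return sync_write;
}

Manual control would simply override this check, e.g. via an export or
mount option chosen to match the hardware.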
Alex