From: Jens Axboe <jens.axboe@oracle.com>
To: Tejun Heo <htejun@gmail.com>
Cc: Michael Tokarev <mjt@tls.msk.ru>,
Richard Scobie <r.scobie@clear.net.nz>,
linux-ide@vger.kernel.org
Subject: Re: SAS v SATA interface performance
Date: Mon, 10 Dec 2007 15:36:57 +0100
Message-ID: <20071210143656.GD9227@kernel.dk>
In-Reply-To: <475CEBBA.3050409@gmail.com>
On Mon, Dec 10 2007, Tejun Heo wrote:
> There's one thing we can do to improve the situation, though. Several
> drives, including the Raptors and 7200.11s, suffer a serious performance
> hit if a sequential transfer is performed via multiple NCQ commands. My
> 7200.11 can do > 100MB/s if a non-NCQ command is used or if only up to
> two NCQ commands are issued; however, if all 31 (the maximum currently
> supported by libata) are used, the transfer rate drops to a miserable
> 70MB/s.
>
> It seems that what we need to do is avoid issuing too many commands to
> one sequential stream. In fact, there isn't much to gain by issuing more
> than two commands to a single sequential stream.
Well... CFQ won't go to deep queue depths across processes if they are
doing streaming IO, but it won't stop a single process from doing so. I'd
like to know what real-life process would issue streaming IO in such an
async manner as to end up with 31 commands pending sequentially? Not very
likely :-)
So I'd consider your case above a microbenchmark result. I'd also claim
that the firmware is very crappy if it performs as described.
There's another possibility as well - that the queueing done by the drive
results in a worse IO issue pattern, and that is why the performance
drops. Did you check with blktrace what the generated IO looks like?
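Something along these lines should be enough to capture it while the
streaming transfer is running (again, the device name is a placeholder):

    blktrace -d /dev/sdX -o ncq-test
    blkparse -i ncq-test | less

That will show the request sizes and offsets actually issued to the drive.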
> Both the Raptors and the 7200.11 perform noticeably better on random
> workloads with NCQ enabled. So it's about time to update the IO
> schedulers accordingly, it seems.
Definitely. Again, microbenchmarks were able to show 30-40% improvements
when I last tested. That's a purely random workload though, again not
something that you would see in real life.
I tend to always run with a depth of around 4 here. It seems to be a good
value: you get some of the benefits of NCQ, but you don't allow the drive
firmware to screw you over.
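For anyone who wants to experiment, the effective depth can be capped
per-device through sysfs (device name is a placeholder):

    cat /sys/block/sdX/device/queue_depth
    echo 4 > /sys/block/sdX/device/queue_depth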
--
Jens Axboe