From: Tejun Heo <htejun@gmail.com>
To: Michael Tokarev <mjt@tls.msk.ru>
Cc: Richard Scobie <r.scobie@clear.net.nz>,
	linux-ide@vger.kernel.org, Jens Axboe <jens.axboe@oracle.com>
Subject: Re: SAS v SATA interface performance
Date: Mon, 10 Dec 2007 16:33:14 +0900
Message-ID: <475CEBBA.3050409@gmail.com>
In-Reply-To: <47507F9A.3080109@msgid.tls.msk.ru>

(cc'ing Jens as it contains some discussion about IO scheduling)

Michael Tokarev wrote:
> Richard Scobie wrote:
>> If one disregards the rotational speed and access time advantage that
>> SAS drives have over SATA, does the SAS interface offer any performance
>> advantage?
> 
> It's a very good question, to which I wish I had an answer myself ;)
> Since I've never tried actual SAS controllers with SAS drives, I'll
> reply from the ol' good SCSI vs SATA perspective.

Purely from a transport-layer protocol perspective, SATA has slightly
lower latency thanks to its simplicity, but compared to actual IO
latency this is negligible, and once NCQ and TCQ come into play the
theoretical advantage disappears entirely.

> They say that modern SATA drives have NCQ, which is "more
> advanced" than the ol' good TCQ used in SCSI (and SAS) drives.
> I've no idea what's "advanced" about it, except that it
> just does not work.  There's almost no difference with
> NCQ turned on or off, and in many cases turning NCQ ON
> actually REDUCES performance.

NCQ is not more advanced than SCSI TCQ.  NCQ is "native" and "advanced"
compared to the old IDE-style bus-releasing queueing support, which was
an ugly beast that no one really supported well.  The only combination
I can remember actually working was first-generation Raptors paired
with a specific controller and a custom driver on Windows.

If you compare protocol to protocol, NCQ should be able to perform as
well as TCQ unless you're talking about a monster storage enclosure
with a lot of spindles behind it.  Again, NCQ has lower overhead, but
bus latency and overhead don't really matter here.

However, that is not to say SATA drives with NCQ support perform as
well as SCSI drives with TCQ support.  SCSI drives are simply faster
and tend to have better firmware.  There is not much the operating
system can do about that.

There's one thing we can do to improve the situation, though.  Several
drives, including Raptors and 7200.11s, suffer a serious performance
hit if a sequential transfer is performed with multiple NCQ commands.
My 7200.11 can do > 100MB/s if non-NCQ commands are used or only up to
two NCQ commands are issued; however, if all 31 tags (the maximum
currently supported by libata) are used, the transfer rate drops to a
miserable 70MB/s.
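
If you want to check this on your own drive, the easiest knob is the
queue_depth attribute the SCSI layer exposes through sysfs (NCQ-capable
libata devices show up there as well).  Something like the trivial
helper below works; the device name and depth values are only examples:

/*
 * cap_qd.c - cap a drive's queue depth via the standard sysfs
 * queue_depth attribute, then rerun a sequential read test at
 * different depths.  Needs root; device name and depth below are
 * only examples.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "sda";	/* e.g. sda */
	int depth = argc > 2 ? atoi(argv[2]) : 2;	/* try 1, 2, 31 */
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/block/%s/device/queue_depth", dev);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%d\n", depth);
	fclose(f);
	printf("%s: queue depth set to %d\n", dev, depth);
	return 0;
}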

It seems that what we need to do is avoid issuing too many commands to
a single sequential stream.  In fact, there isn't much to gain from
issuing more than two commands to one sequential stream.
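
Roughly what I mean, as a made-up sketch (none of these names exist in
the block layer, and the threshold of two is just the number from the
measurements above):

/*
 * Hypothetical sketch only: illustrates a per-stream cap on
 * outstanding NCQ commands, not actual elevator code.
 */
#include <stdbool.h>
#include <stdio.h>

#define SEQ_STREAM_MAX_INFLIGHT	2	/* ~2 outstanding commands keep a
					 * sequential stream busy */

struct stream_state {
	unsigned long long next_sector;	/* sector right after the last
					 * request dispatched for this
					 * stream */
	int inflight;			/* commands from this stream still
					 * sitting in the drive's queue */
};

/*
 * Dispatch decision: hold a request back only when it extends a
 * detected sequential stream that already has enough commands in
 * flight.  Random requests still get the full queue depth, where NCQ
 * actually helps.
 */
static bool hold_back(const struct stream_state *s,
		      unsigned long long sector)
{
	bool sequential = (sector == s->next_sector);

	return sequential && s->inflight >= SEQ_STREAM_MAX_INFLIGHT;
}

int main(void)
{
	struct stream_state s = { .next_sector = 1000, .inflight = 2 };

	/* Sequential request with two in flight: hold it back. */
	printf("sequential -> hold: %d\n", hold_back(&s, 1000));
	/* Random request: dispatch regardless of in-flight count. */
	printf("random     -> hold: %d\n", hold_back(&s, 5000));
	return 0;
}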

Both Raptors and the 7200.11 perform noticeably better on random
workloads with NCQ enabled.  So it seems it's about time to update the
IO schedulers accordingly.

Thanks.

-- 
tejun
