public inbox for linux-kernel@vger.kernel.org
From: Jan Kasprzak <kas@fi.muni.cz>
To: linux-kernel@vger.kernel.org
Subject: 3ware disk latency?
Date: Mon, 10 Jul 2006 16:13:15 +0200	[thread overview]
Message-ID: <20060710141315.GA5753@fi.muni.cz> (raw)

	Hi all,

I have upgraded my machine from 3ware 7508 with 8x 250GB ATA drives
to 3ware 9550SX-8LP with 8x 500GB SATA-II drives, and I have found that
while the overall throughput of the disks is higher than before,
the latency - measured as the number of messages per time unit Qmail can
deliver - is far worse than before[1]. I have tried the deadline
and cfq schedulers, but neither brought any visible improvement.

	I have found the following two-year-old mail by Jens Axboe:
http://www.uwsg.iu.edu/hypermail/linux/kernel/0409.0/1330.html
- the problem at that time was that the device had its internal
command queue deeper than the nr_requests of the block device layer,
so the I/O scheduler couldn't do anything.  I was surprised
to find that the situation is still the same: I have

/sys/block/sd[a-h]/queue/nr_requests == 128, and
/sys/devices/pci0000:00/0000:00:0b.0/0000:01:03.0/host0/target*/*/queue_depth == 254

I have verified that even with the older 3ware controller (7508) the
situation is the same. On the newer hardware it is probably more visible,
because the controller has a bigger on-board cache (128MB vs. 32MB, I think).

	I have tried lowering /sys/devices/.../queue_depth to 4
and disabling NCQ, and the latency got a bit better. But it is still
nowhere near where it was on the older HW.
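For reference, this tuning is just a matter of echoing values into sysfs;
a sketch of what I did (the PCI path is from my box and the exact elevator
names depend on the kernel, so adjust to taste):

```shell
#!/bin/sh
# Show the block-layer queue size next to the controller's per-target
# queue depth (the mismatch is the problem described above).
cat /sys/block/sda/queue/nr_requests

# Lower the per-target queue depth so that requests queue up in the
# block layer, where the I/O scheduler can actually reorder them,
# instead of inside the controller:
for d in /sys/devices/pci0000:00/0000:00:0b.0/0000:01:03.0/host0/target*/*/queue_depth
do
	cat "$d"
	echo 4 > "$d"
done

# Switch the elevator for each disk (deadline or cfq):
echo deadline > /sys/block/sda/queue/scheduler
```

Run as root, obviously; the changes do not survive a reboot unless put
into an init script.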

	Does anybody else see similar latency problems with
3ware 95xx controllers? Thanks,

-Yenya

[1] the old configuration peaked at ~2k-4k messages per 5 minutes,
	with 1k-2k messages/5min being pretty normal, while the new one
	has a maximum throughput of 1k messages per 5 minutes, and the
	normal speed is much lower - in the low hundreds of messages
	per 5 minutes.
	The new system runs about the same load as the previous one,
	and the layout of disks is also the same (just the newer drives
	are bigger, of course).


-- 
| Jan "Yenya" Kasprzak  <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839      Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/    Journal: http://www.fi.muni.cz/~kas/blog/ |
> I will never go to meetings again because I think  face to face meetings <
> are the biggest waste of time you can ever have.        --Linus Torvalds <

Thread overview: 13+ messages
2006-07-10 14:13 Jan Kasprzak [this message]
2006-07-26 16:52 ` 3ware disk latency? dean gaudet
2006-07-26 20:37   ` Alan Cox
2006-07-26 21:53     ` dean gaudet
2006-07-26 22:07       ` adam radford
2006-07-26 22:10         ` Justin Piszcz
2006-07-26 22:48         ` Jeff V. Merkey
2006-07-26 22:20           ` adam radford
2006-07-27  3:00         ` dean gaudet
2006-07-27 13:42           ` Jan Kasprzak
2006-07-27 12:11       ` Jan Kasprzak
2006-07-27 18:30         ` David Lang
  -- strict thread matches above, loose matches on Subject: below --
2006-08-02 11:48 Dieter Stüken
