From: Michael Monnerie <michael.monnerie@is.it-management.at>
To: xfs@oss.sgi.com
Subject: Re: XFS hangs and freezes with LSI 9265-8i controller on high i/o
Date: Fri, 15 Jun 2012 11:52:17 +0200 [thread overview]
Message-ID: <47854255.KfXFdqTbOZ@saturn> (raw)
In-Reply-To: <20120615001602.GF7339@dastard>
On Friday, 15 June 2012, at 10:16:02, Dave Chinner wrote:
> So, the average service time for an IO is 10-16ms, which is a seek
> per IO. You're doing primarily 128k read IOs, and maybe one or 2
> writes a second. You have a very deep request queue: > 512 requests.
> Have you tuned /sys/block/sda/queue/nr_requests up from the default
> of 128? This is going to be one of the causes of your problems - you
> have 511 outstanding write requests, and only one read at a time.
> Reduce the I/O scheduler queue depth, and potentially also the device
> CTQ depth.
Dave, I'm puzzled by this. I would have thought that a higher
nr_requests lets the block layer re-sort more I/O in the elevator, and
therefore helps throughput. Why would 128 be better than 512 here?
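For reference, nr_requests is read and written through sysfs. The sketch below uses a mock sysfs tree so it can be dry-run without root; on a real system you would operate on /sys directly (as root), and sda is only an assumed device name:

```shell
# Mock sysfs tree so this can be tried without root; on a real system
# use /sys directly (as root) and your actual device name.
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/block/sda/queue"
echo 512 > "$SYSFS/block/sda/queue/nr_requests"   # simulate the raised value

# Inspect the current request queue depth:
cat "$SYSFS/block/sda/queue/nr_requests"

# Drop it back to the kernel default of 128:
echo 128 > "$SYSFS/block/sda/queue/nr_requests"
```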
And maybe Matthew could profit from limiting vm.dirty_bytes. I've seen
that when this value is too high, the server gets stuck on lots of
writes; for streaming it's better to keep it smaller so the disk
writes can keep up and delays don't get too long.
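A sketch of what limiting the dirty page cache could look like; the 256 MiB figure is purely illustrative, not a recommendation from this thread, and the right value depends on workload and disk speed:

```shell
# Compute a dirty-cache ceiling of 256 MiB (illustrative value only).
DIRTY_BYTES=$((256 * 1024 * 1024))
echo "vm.dirty_bytes = $DIRTY_BYTES"
# Apply at runtime (requires root):
#   sysctl -w vm.dirty_bytes=$DIRTY_BYTES
# Or persist the line printed above in /etc/sysctl.conf.
```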
> Oh, I just noticed you might be using CFQ (it's the default in
> dmesg). Don't - CFQ is highly unsuited to hardware RAID - it's
> heuristically tuned to work well on single SATA drives. Use deadline,
> or preferably for hardware RAID, noop.
Wouldn't deadline be better with a larger request queue? As I
understand it, noop only merges adjacent I/Os, while deadline does a
bit more and should be able to build larger contiguous I/O areas
because it waits a bit longer before dispatching.
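Switching the scheduler is likewise a one-line sysfs write. Again a mock tree so the commands can be tried safely; on a real system you would echo into /sys/block/<dev>/queue/scheduler as root, and the kernel moves the brackets to the newly selected scheduler:

```shell
# Mock sysfs tree for a dry run; use /sys on a real system (as root).
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/block/sda/queue"
echo "noop deadline [cfq]" > "$SYSFS/block/sda/queue/scheduler"

# The active scheduler is shown in brackets:
cat "$SYSFS/block/sda/queue/scheduler"

# Select deadline (on real sysfs the kernel re-renders the bracketed
# list; this mock simply stores the written name):
echo deadline > "$SYSFS/block/sda/queue/scheduler"
```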
--
with kind regards,
Michael Monnerie, Ing. BSc
it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531