From: Stan Hoeppner <stan@hardwarefreak.com>
To: xfs@oss.sgi.com
Subject: Re: Tuning XFS for real time audio on a laptop with encrypted LVM
Date: Fri, 21 May 2010 01:25:42 -0500 [thread overview]
Message-ID: <4BF62766.8070105@hardwarefreak.com> (raw)
In-Reply-To: <20100521041415.GW8120@dastard>
Dave Chinner put forth on 5/20/2010 11:14 PM:
> I only ever use the noop scheduler with XFS these days. CFQ has been
> a steaming pile of ever changing regressions for the past 4 or 5
> kernel releases, so i stopped using it. Besides, XFS is often 10-15%
> faster on no-op for the same workload, anyway...
IIRC the elevator sits below the FS in the stack, and has a tighter
relationship to the block device driver and physical storage subsystem than
to the FS. I have one box with a 7.2K RPM 500GB WD drive on a sata_sil
controller that doesn't support NCQ. Without NCQ, whether due to no
controller support or to drives blacklisted with ATA_HORKAGE_NONCQ, the
deadline and anticipatory (now removed from the kernel, IIRC) elevators
yield vastly superior performance under load compared to CFQ or noop.
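A quick way to check whether a given drive is actually running with NCQ is
to look at its queue depth in sysfs. This is just a sketch; the device name
"sda" below is an assumption, so substitute your own disk:

```shell
# Print a disk's NCQ queue depth from sysfs.
# A depth of 1 means NCQ is off; 31 is typical when NCQ is active.
ncq_depth() {
    qd=/sys/block/$1/device/queue_depth
    if [ -r "$qd" ]; then
        cat "$qd"
    else
        # Attribute absent: no such disk on this box, or not SATA/SCSI
        echo "unknown"
    fi
}

ncq_depth sda
```

The NCQ depth negotiated at probe time also shows up in the kernel log
(dmesg | grep -i ncq), which is handy for spotting blacklisted drives.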
Noop fits well with good hardware RAID, either a local PCI/PCI-X/PCIe RAID
card or a straight FC HBA talking to a SAN array controller. CFQ just gets
in the way with good hardware. In some testing I've done with FC HBAs and
target LUNs on IBM FAStT and Nexsan SAN arrays, deadline has shown a tiny
advantage over noop in a few synthetic tests. This testing was performed
on SLED 10 and Debian Etch guests atop VMware ESX 3 at night on weekends,
when load across the ESX blade farm was near zero, but it was still done in
a virtual environment. On bare hardware, I'm not sure one would get the
same results. Anyway, the deadline elevator gave so little advantage over
noop that I'd still recommend noop on good hardware due to its near-zero
CPU overhead. Deadline has a few fancy tricks, so it will always eat more
CPU, even if only a modest amount.
I'd sum the elevator choice up this way: if you have a good storage
hardware and driver combo, such as fast SATA disks with working NCQ or just
about any SCSI, SAS, RAID, or SAN setup, go with noop. For lesser
hardware/drivers (lacking or crappy NCQ, or laptops with slow 4200/5400 RPM
drives, even if they do have good NCQ), use deadline.
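For reference, the elevator can be checked and switched per device at
runtime through sysfs. A sketch, again assuming "sda" as the disk name:

```shell
# Print the available elevators for a disk; the active one is
# shown in [brackets] by the kernel.
show_elevator() {
    f=/sys/block/$1/queue/scheduler
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "none"
    fi
}

show_elevator sda

# To switch at runtime (as root), write the name back, e.g.:
#   echo noop > /sys/block/sda/queue/scheduler
# Or set a default for all disks on the kernel command line:
#   elevator=noop
```

Runtime switches take effect immediately and don't require a remount, so
it's easy to benchmark your own workload under each elevator.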
I agree with Dave that CFQ isn't all that great, and in my testing it's even
worse when used with Linux guests on ESX than it is on bare metal.
Caveat: I'm no expert, and I don't do storage subsystem performance testing
all day long. I'm just reporting my first hand experience. YMMV and all
the normal disclaimers apply.
--
Stan
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 9+ messages
2010-05-21 2:16 Tuning XFS for real time audio on a laptop with encrypted LVM Pedro Ribeiro
2010-05-21 4:14 ` Dave Chinner
2010-05-21 6:25 ` Stan Hoeppner [this message]
2010-05-21 11:29 ` Pedro Ribeiro
2010-05-21 13:45 ` Stan Hoeppner
2010-05-22 12:21 ` Pedro Ribeiro
2010-05-22 22:13 ` Stan Hoeppner
2010-05-22 13:22 ` Eric Sandeen
2010-05-22 13:54 ` Pedro Ribeiro