From: Stan Hoeppner <stan@hardwarefreak.com>
To: Linda Walsh <xfs@tlinx.org>
Cc: xfs-oss <xfs@oss.sgi.com>
Subject: Re: does having ~Ncore+1? kworkers flushing XFS to 1 disk improve throughput?
Date: Sat, 24 Aug 2013 14:09:40 -0500
Message-ID: <521904F4.90208@hardwarefreak.com>
In-Reply-To: <5218EADD.4000704@tlinx.org>
On 8/24/2013 12:18 PM, Linda Walsh wrote:
>
>
> Stan Hoeppner wrote:
>> On 8/23/2013 9:33 PM, Linda Walsh wrote:
>>
>>> So what are all the kworkers doing and does having 6 of them do
>>> things at the same time really help disk-throughput?
>>>
>>> Seems like they would conflict w/each other, cause disk
>>> contention, and extra fragmentation as they do things? If they
>>> were all writing to separate disks, that would make sense, but do
>>> that many kworker threads need to be finishing out disk I/O on
>>> 1 disk?
>>
>> https://raw.github.com/torvalds/linux/master/Documentation/workqueue.txt
> ----
>
> Thanks for the pointer.
>
> I see ways to limit #workers/cpu if they were hogging too much cpu,
> which isn't the problem.. My concern is that the work they are
> doing is all writing info back to the same physical disk -- and that
> while >1 writer can improve throughput, generally, it would be best
> if the pending I/O was sorted in disk order and written out using
> the elevator algorithm. I.e. I can't imagine that it takes 6-8
> processes (mostly limiting themselves to 1 NUMA node) to keep the
> elevator filled?
You're making a number of incorrect assumptions here. The work queues
are generic, which is clearly spelled out in the document above. The
kworker threads are just that, kernel threads, not processes as you
assume above. XFS is not the only subsystem that uses them. Any
subsystem or driver can use work queues. You can't tell what's
executing within a kworker thread from top or ps output. You must look
at the stack trace.
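For example (a sketch, not something you can do from top alone): on a
kernel built with CONFIG_STACKTRACE you can sample what each kworker is
actually executing from procfs, as root:

```shell
# List kworker thread PIDs, then dump the current kernel stack of
# each. /proc/<pid>/stack needs root and CONFIG_STACKTRACE=y; the
# stack shows which subsystem's work item is running right now.
ps -e -o pid,comm | awk '$2 ~ /^kworker/ {print $1}' |
while read -r pid; do
    echo "== kworker pid $pid =="
    cat "/proc/$pid/stack" 2>/dev/null || echo "(stack not readable)"
done
```

Sample it a few times; you'll typically see a mix of block, SCSI, and
filesystem call chains, not just XFS.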
The work you are seeing in those 7 or 8 kworker threads is not all
parallel XFS work. Your block device driver, whether libata, SCSI, or a
proprietary RAID card driver, is placing work in these queues as well,
as can any other device driver or kernel subsystem.
Nothing bypasses the elevator; sectors are still sorted. But keep in
mind that if you're using a hardware RAID controller, -it- does the
final sorting of writeback anyway, so this is a non-issue. I can't
recall whether you use md/RAID or an LSI RAID controller. ISTR you
stating LSI some time back, but my memory may be faulty here.
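As a quick sanity check (my suggestion, device names will vary), the
elevator in use is visible per block device in sysfs; the bracketed
name is the active one:

```shell
# Print the active I/O scheduler for each block device. The entry in
# square brackets is the one in use; 'noop' is common when a hardware
# RAID controller is doing its own sorting behind the kernel's back.
for f in /sys/block/*/queue/scheduler; do
    dev=${f#/sys/block/}
    printf '%s: ' "${dev%%/*}"
    cat "$f"
done 2>/dev/null
```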
So in a nutshell, whatever performance issue you're having, if you
indeed have one, isn't caused by work queues or by the number of
kworker threads on your system, per CPU or otherwise. You need to look
elsewhere for the bottleneck. Given that it's lightning fast up to the
point buffers start flushing to disk, it's pretty clear your spindles
simply can't keep up.
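One rough way to confirm that (my addition, relying on the standard
/proc/diskstats layout): sample the per-device in-flight I/O count
while the flush is running. A count that stays pinned near the device
queue depth means the disk, not the workqueues, is saturated:

```shell
# Field 3 of each /proc/diskstats line is the device name and field
# 12 is the number of I/Os currently in flight on it. Run this a few
# times during writeback and watch the in-flight count.
awk '{printf "%-12s in-flight=%s\n", $3, $12}' /proc/diskstats
```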
--
Stan
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 5+ messages
2013-08-24 2:33 does having ~Ncore+1? kworkers flushing XFS to 1 disk improve throughput? Linda Walsh
2013-08-24 4:07 ` Stan Hoeppner
2013-08-24 17:18 ` Linda Walsh
2013-08-24 19:09 ` Stan Hoeppner [this message]
2013-08-24 23:22 ` Linda Walsh