From: Jens Axboe <axboe@kernel.dk>
To: Ming Lei <tom.leiming@gmail.com>
Cc: Brian King <brking@linux.vnet.ibm.com>,
linux-block <linux-block@vger.kernel.org>,
"open list:DEVICE-MAPPER (LVM)" <dm-devel@redhat.com>,
Mike Snitzer <snitzer@redhat.com>,
Alasdair Kergon <agk@redhat.com>
Subject: Re: [PATCH 1/1] block: Convert hd_struct in_flight from atomic to percpu
Date: Thu, 29 Jun 2017 09:58:50 -0600
Message-ID: <7f0a852e-5f90-4c63-9a43-a4180557530c@kernel.dk>
In-Reply-To: <CACVXFVMG=gD1Dq2SRKxhfPS33mzCC_dkSpVoYskaLE7PXV7xGQ@mail.gmail.com>
On 06/29/2017 02:40 AM, Ming Lei wrote:
> On Thu, Jun 29, 2017 at 5:49 AM, Jens Axboe <axboe@kernel.dk> wrote:
>> On 06/28/2017 03:12 PM, Brian King wrote:
>>> This patch converts the in_flight counter in struct hd_struct from a
>>> pair of atomics to a pair of percpu counters. This eliminates a couple
>>> of atomics from the hot path. When running this on a Power system, to
>>> a single null_blk device with 80 submission queues, irq mode 0, with
>>> 80 fio jobs, I saw IOPS go from 1.5M IO/s to 11.4M IO/s.
>>
>> This has been done before, but I've never really liked it. The reason is
>> that it means that reading the part stat inflight count now has to
>> iterate over every possible CPU. Did you use partitions in your testing?
>> How many CPUs were configured? When I last tested this a few years ago
>> on even a quad core nehalem (which is notoriously shitty for cross-node
>> latencies), it was a net loss.
>
> One year ago, I saw null_blk's IOPS drop to 10% of the
> non-RQF_IO_STAT case on a dual-socket ARM64 (each CPU has
> 96 cores, and there are two NUMA nodes) too. The performance was
> basically recovered once a per-NUMA-node counter was introduced and
> used in this case, but the patch was never posted out.
> If anyone is interested in that, I can rebase the patch on the current
> block tree and post it out. I guess the performance issue is more or
> less related to the system's cache coherency implementation.
> This issue on ARM64 can be observed with the following userspace
> atomic counting test too:
>
> http://kernel.ubuntu.com/~ming/test/cache/
How well did the per-node approach work? It doesn't seem to me like it
would go far enough, and per-CPU is too much. One potential improvement
would be to change part_stat_read() to loop over online CPUs only,
instead of all possible CPUs. When CPUs go on/offline, use that as the
slow path to ensure the stats stay sane. There's often a huge difference
between the configured NR_CPUS and what the system actually has. As
Brian states, RH ships with NR_CPUS=2048, while I doubt many customers
actually run that many...
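
Roughly what I have in mind for the read side - just a sketch, not
compile-tested, and the percpu field here is invented pending whatever
Brian's patch ends up naming it:

/* assumes hd_struct grows something like: unsigned long __percpu *in_flight[2] */
static unsigned long part_in_flight_read(struct hd_struct *part)
{
        unsigned long sum = 0;
        int cpu;

        /*
         * Walk online CPUs only.  A hotplug callback would have to fold
         * the counts of a CPU going offline into a catch-all bucket in
         * the slow path, so nothing gets lost.
         */
        for_each_online_cpu(cpu)
                sum += *per_cpu_ptr(part->in_flight[0], cpu) +
                       *per_cpu_ptr(part->in_flight[1], cpu);

        return sum;
}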
Outside of coming up with a more clever data structure that is fully
CPU-topology aware, one thing that could work is just having X
cache-line-separated read/write in-flight counters per node, where X is
some suitable value (like 4). That prevents cross-node traffic, and it
also keeps the cross-CPU traffic fairly low. That should provide a nice
balance between the cost of incrementing the in-flight counters and the
cost of looping over them when reading.
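
As a sketch of that layout (untested, all names invented here, X
hardcoded to 4):

#define INFLIGHT_BUCKETS        4       /* the "X" above, per node */

struct inflight_bucket {
        atomic_t cnt[2];                /* [0] = read, [1] = write */
} ____cacheline_aligned_in_smp;

/* nr_node_ids * INFLIGHT_BUCKETS of these would hang off hd_struct */

static void part_inflight_inc(struct inflight_bucket *buckets, int rw)
{
        int idx = numa_node_id() * INFLIGHT_BUCKETS +
                  (raw_smp_processor_id() % INFLIGHT_BUCKETS);

        atomic_inc(&buckets[idx].cnt[rw]);
}

static unsigned int part_inflight_read(struct inflight_bucket *buckets)
{
        unsigned int sum = 0;
        int i;

        /* the read side loops nodes * X entries, not NR_CPUS */
        for (i = 0; i < nr_node_ids * INFLIGHT_BUCKETS; i++)
                sum += atomic_read(&buckets[i].cnt[0]) +
                       atomic_read(&buckets[i].cnt[1]);

        return sum;
}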
And that brings me to the next part...
>> I do agree that we should do something about it, and it's one of those
>> items I've highlighted in talks about blk-mq on pending issues to fix
>> up. It's just not great as it currently stands, but I don't think per
>> CPU counters is the right way to fix it, at least not for the inflight
>> counter.
>
> Yeah, it won't be an issue for the non-mq path, and for the blk-mq path,
> maybe we can use some blk-mq knowledge (the tagset?) to figure out the
> 'in_flight' counter. I thought about it before, but never came up with a
> perfect solution, and it looks a bit hard, :-)
The tags are already a bit spread out, so it's worth a shot. That would
remove the need to do anything in the inc/dec path, as the tags already
do that. The in-flight count could be easily retrieved with
sbitmap_weight(). The only issue here is that we need separate read and
write counters, and the weight would obviously only get us the total
count. But we can have a slower path for that, just iterate the tags and
count them. The fast path only cares about total count.
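
The fast path could look something like this (untested sketch, field
names from memory of the blk-mq tag structures; it also assumes
sbitmap_weight() is callable from here, and a shared tag set would need
the slow path to attribute requests to the right queue):

static unsigned int blk_mq_tags_in_flight(struct request_queue *q)
{
        struct blk_mq_hw_ctx *hctx;
        unsigned int i, inflight = 0;

        queue_for_each_hw_ctx(q, hctx, i) {
                struct blk_mq_tags *tags = hctx->tags;

                /* set bits == allocated tags == requests in flight */
                inflight += sbitmap_weight(&tags->bitmap_tags.sb);
                if (tags->nr_reserved_tags)
                        inflight += sbitmap_weight(&tags->breserved_tags.sb);
        }

        return inflight;
}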
Let me try that out real quick.
--
Jens Axboe
Thread overview: 30+ messages
2017-06-28 21:12 [PATCH 1/1] block: Convert hd_struct in_flight from atomic to percpu Brian King
2017-06-28 21:49 ` Jens Axboe
2017-06-28 22:04 ` Brian King
2017-06-29 8:40 ` Ming Lei
2017-06-29 15:58 ` Jens Axboe [this message]
2017-06-29 16:00 ` Jens Axboe
2017-06-29 18:42 ` Jens Axboe
2017-06-30 1:20 ` Ming Lei
2017-06-30 2:17 ` Jens Axboe
2017-06-30 13:05 ` [dm-devel] " Brian King
2017-06-30 14:08 ` Jens Axboe
2017-06-30 18:33 ` Brian King
2017-06-30 23:23 ` Ming Lei
2017-06-30 23:26 ` Jens Axboe
2017-07-01 2:18 ` Brian King
2017-07-04 1:20 ` Ming Lei
2017-07-04 20:58 ` Brian King
2017-07-01 4:17 ` Jens Axboe
2017-07-01 4:59 ` Jens Axboe
2017-07-01 16:43 ` Jens Axboe
2017-07-04 20:55 ` Brian King
2017-07-04 21:57 ` Jens Axboe
2017-06-29 16:25 ` Ming Lei
2017-06-29 17:31 ` Brian King
2017-06-30 1:08 ` Ming Lei
2017-06-28 21:54 ` Jens Axboe
2017-06-28 21:59 ` Jens Axboe
2017-06-28 22:07 ` [dm-devel] " Brian King
2017-06-28 22:19 ` Jens Axboe
2017-06-29 12:59 ` Brian King