From: Mike Snitzer <snitzer@redhat.com>
To: Hannes Reinecke <hare@suse.de>
Cc: "axboe@kernel.dk" <axboe@kernel.dk>,
"keith.busch@intel.com" <keith.busch@intel.com>,
Sagi Grimberg <sagig@dev.mellanox.co.il>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
Christoph Hellwig <hch@infradead.org>,
device-mapper development <dm-devel@redhat.com>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
Bart Van Assche <bart.vanassche@sandisk.com>
Subject: Re: [RFC PATCH] dm: fix excessive dm-mq context switching
Date: Tue, 9 Feb 2016 09:55:47 -0500
Message-ID: <20160209145547.GA21623@redhat.com>
In-Reply-To: <56B99A49.5050400@suse.de>
On Tue, Feb 09 2016 at 2:50am -0500,
Hannes Reinecke <hare@suse.de> wrote:
> On 02/07/2016 06:20 PM, Mike Snitzer wrote:
> > On Sun, Feb 07 2016 at 11:54am -0500,
> > Sagi Grimberg <sagig@dev.mellanox.co.il> wrote:
> >
> >>
> >>>> If so, can you check with e.g.
> >>>> perf record -ags -e LLC-load-misses sleep 10 && perf report whether this
> >>>> workload triggers perhaps lock contention ? What you need to look for in
> >>>> the perf output is whether any functions occupy more than 10% CPU time.
> >>>
> >>> I will, thanks for the tip!
> >>
> >> The perf report is very similar to the one that started this effort..
> >>
> >> I'm afraid we'll need to resolve the per-target m->lock in order
> >> to scale with NUMA...
> >
> > Could be. Just for testing, you can try the 2 topmost commits I've put
> > here (once applied both __multipath_map and multipath_busy won't have
> > _any_ locking.. again, very much test-only):
> >
> > http://git.kernel.org/cgit/linux/kernel/git/snitzer/linux.git/log/?h=devel2
> >
> So, I gave those patches a spin.
> Sad to say, they do _not_ resolve the issue fully.
>
> My testbed (2 paths per LUN, 40 CPUs, 4 cores) yields 505k IOPs with
> those patches.
That isn't a surprise.  We already knew the m->lock spinlock contention
was a problem, and NUMA makes it even worse.
> Using a single path (without those patches, but still running
> multipath on top of that path) the same testbed yields 550k IOPs.
> Which very much smells like a lock contention ...
> We do get a slight improvement, though; without those patches I
> could only get about 350k IOPs. But still, I would somehow expect 2
> paths to be faster than just one ..
https://www.redhat.com/archives/dm-devel/2016-February/msg00036.html
hint hint...
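
For the curious, here is a minimal userspace sketch of that direction,
using liburcu as a stand-in for the kernel RCU API.  Everything below is
illustrative only (the struct and function names are made up, not actual
dm-mpath code): readers pick the published path without taking any
per-target lock, and the writer swaps paths under a mutex and only frees
the old one once in-flight readers have drained.

/*
 * Userspace analogue of an RCU-protected fast path.
 * Build: gcc mpath_rcu_sketch.c -o mpath_rcu_sketch -lurcu -lpthread
 */
#include <urcu.h>		/* default (memb) liburcu flavor */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct path {
	char name[16];
	/* ... per-path state ... */
};

static struct path *current_path;	/* RCU-protected pointer */
static pthread_mutex_t update_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fast path: no spinlock, just an RCU read-side critical section. */
static void map_io(void)
{
	struct path *p;

	rcu_read_lock();
	p = rcu_dereference(current_path);
	if (p)
		printf("mapping I/O to %s\n", p->name);
	rcu_read_unlock();
}

/*
 * Slow path: publish a new path under a mutex, then free the old one
 * only after all readers that might still see it have finished.
 */
static void switch_path(const char *name)
{
	struct path *newp = calloc(1, sizeof(*newp));
	struct path *oldp;

	if (!newp)
		return;
	snprintf(newp->name, sizeof(newp->name), "%s", name);

	pthread_mutex_lock(&update_lock);
	oldp = current_path;		/* writers serialize on update_lock */
	rcu_assign_pointer(current_path, newp);
	pthread_mutex_unlock(&update_lock);

	synchronize_rcu();		/* wait for readers still using oldp */
	free(oldp);
}

int main(void)
{
	rcu_register_thread();		/* this thread acts as a reader */

	switch_path("path-A");
	map_io();
	switch_path("path-B");		/* frees path-A after readers drain */
	map_io();

	rcu_unregister_thread();
	free(current_path);		/* single-threaded teardown */
	return 0;
}

In the kernel the shape would be the same: rcu_read_lock() plus
rcu_dereference() on the __multipath_map() fast path, with locking
confined to path reconfiguration, which is presumably where the later
"RCU-ified dm-mpath" postings in this thread are headed.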