From: Mike Snitzer
Subject: Re: dm-multipath low performance with blk-mq
Date: Thu, 4 Feb 2016 09:44:12 -0500
Message-ID: <20160204144412.GA18529@redhat.com>
In-Reply-To: <56B360E3.2020806@suse.de>
References: <56AC79D0.5060104@suse.de> <20160130191238.GA18686@redhat.com> <56AEFF63.7050606@suse.de> <20160203180406.GA11591@redhat.com> <20160203182423.GA12913@redhat.com> <56B2F5BC.1010700@suse.de> <20160204135420.GA18227@redhat.com> <56B358EE.6000007@suse.de> <20160204140959.GA18328@redhat.com> <56B360E3.2020806@suse.de>
To: Hannes Reinecke
Cc: axboe@kernel.dk, Christoph Hellwig, Sagi Grimberg, linux-nvme@lists.infradead.org, keith.busch@intel.com, device-mapper development, linux-block@vger.kernel.org, Bart Van Assche
List-Id: dm-devel.ids

On Thu, Feb 04 2016 at 9:32am -0500,
Hannes Reinecke wrote:

> On 02/04/2016 03:09 PM, Mike Snitzer wrote:
> > On Thu, Feb 04 2016 at 8:58am -0500,
> > Hannes Reinecke wrote:
> >
> >> On 02/04/2016 02:54 PM, Mike Snitzer wrote:
> >>> On Thu, Feb 04 2016 at 1:54am -0500,
> >>> Hannes Reinecke wrote:
> >>>
> >> [ .. ]
> >>>> But anyway, I'll be looking at your patches.
> >>>
> >>> Thanks, sadly none of the patches are going to fix the performance
> >>> problems, but I do think they are a step forward.
> >>>
> >> Hmm. I've got a slew of patches converting dm-mpath to use atomic_t
> >> and bitops; with those we should be able to move to RCU for path
> >> lookup and do away with most of the locking.
> >> Quite raw, though; drop me a mail if you're interested.
> >
> > Hmm, OK, I just switched m->lock from spinlock_t to rwlock_t, see:
> > https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.6&id=a5226e23a6958ac9b7ade13a983604c43d232c7d
> >
> > So any patch you have in this area would need rebasing.  I'll gladly
> > look at what you have (even if it isn't rebased), so yes, please share.
> >
> > (It could be that there isn't a big enough win associated with switching
> > to rwlock_t -- that we could get away without that particular churn.
> > I'm open to dropping it if you think rwlock_t is pointless given we'll
> > take the write lock once repeat_count drops to 0.)
>
> Personally, I don't think switching to a rwlock_t will buy us
> anything; for decent performance you have to set rq_min_io to 1
> anyway, thereby defeating the purpose of the rwlock.

OK, I'll drop the rwlock_t commit.

> My thinking was rather a different direction:
> move the crucial bits of the multipath structure to atomics, and
> split the path selection code into one part that selects a
> path within a path group, and another that switches path groups.
> If we do that we can use RCU for the paths themselves, and
> would only need to take the spinlock when switching path
> groups.  That should be okay, as switching path groups is
> (potentially) a rather slow operation anyway.

If you could focus on dusting that work off and helping me get
switched over to RCU, I'd be very thankful.
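
For illustration, a minimal sketch of the RCU-protected fast path Hannes
describes might look something like the following.  The struct and function
names are hypothetical (not the actual dm-mpath code), and the path selector
is deliberately trivial:

/*
 * Illustrative sketch only: hypothetical types and names, not dm-mpath.
 * Readers pick a path entirely under rcu_read_lock(); the spinlock is
 * only taken on the slow path, when the active path group is switched.
 */
#include <linux/atomic.h>
#include <linux/rcupdate.h>
#include <linux/rculist.h>
#include <linux/spinlock.h>

struct pgpath_ex {				/* hypothetical per-path state */
	struct list_head list;
	atomic_t nr_pending;			/* in-flight I/O on this path */
};

struct priority_group_ex {			/* hypothetical path group */
	struct list_head pgpaths;		/* RCU-protected list of pgpath_ex */
};

struct multipath_ex {				/* hypothetical multipath state */
	spinlock_t lock;			/* slow path only */
	struct priority_group_ex __rcu *current_pg;
};

/* Fast path: pick a path for an I/O without taking m->lock. */
static struct pgpath_ex *choose_path(struct multipath_ex *m)
{
	struct priority_group_ex *pg;
	struct pgpath_ex *p, *best = NULL;

	rcu_read_lock();
	pg = rcu_dereference(m->current_pg);
	if (pg) {
		list_for_each_entry_rcu(p, &pg->pgpaths, list) {
			best = p;		/* trivial selector for the sketch */
			break;
		}
		if (best)
			atomic_inc(&best->nr_pending);
	}
	rcu_read_unlock();
	return best;
}

/* Slow path: switching path groups still takes the spinlock. */
static void switch_pg(struct multipath_ex *m, struct priority_group_ex *new_pg)
{
	unsigned long flags;

	spin_lock_irqsave(&m->lock, flags);
	rcu_assign_pointer(m->current_pg, new_pg);
	spin_unlock_irqrestore(&m->lock, flags);
	/* the old group would be freed only after a grace period elapses */
}

The point of the split is that choose_path() runs on every I/O without
touching m->lock, while the lock is confined to the comparatively rare
path-group switch.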